Practicum Projects

The practicum provides an opportunity for students to address a real-world analytics problem faced by the sponsor corporation or agency.

Students are assigned a Practicum Project in the fall semester to apply their knowledge and practice project management, teamwork, exploration, and leadership. This is where the rubber meets the road: students experience firsthand the challenges of asking the right questions and of obtaining, cleaning, and preparing big data. Students work on their projects throughout the fall and spring semesters, meeting regularly with industry partners. The practicum concludes in the spring with a final presentation to the program and the industry sponsor.

Recent projects have examined the opiate crisis, advanced supply chain management, educational success, lending models, population health and health behavior, and financial institution performance.

2019 Practicum Projects

Title: Identifying Traffic Wind Gusts: Determining Predictability of Roadway Impacts by Type

Scope / Research Question:

Using Scout-generated data, Avant Course would like to differentiate natural wind readings from those generated by traffic. Additionally, the team is examining the potential of predicting Scout readings, collected with the company's sensor-based data collection tool, from macro-level weather data. Currently, no consistent method exists for identifying traffic-generated wind.

Methods:

We performed exploratory data analysis on the Scout data. Modeling will use various machine learning algorithms, including logistic regression, random forests, and XGBoost.

Scope / Research Question:

Our UNH team focused on wind data collected by the Scout to identify a phenomenon known as urban canyons within the greater Boston metropolitan area. For the purposes of this project, an urban canyon is defined as a segment of any street where the physical environment channels and amplifies street-level wind speeds on a consistent basis. Identifying urban canyons may be of particular interest to fleet managers, as high-wind areas may subject vehicles to increased air resistance and negatively impact gas mileage. If urban canyons can be identified, they could potentially be incorporated into route optimization algorithms that help fleets operate efficiently and minimize costs.

Methods:

  • Locally weighted regression (LOESS) was used to estimate wind speed at various weather stations on a per-second basis, since NOAA weather data is only available hourly.
  • Smoothing splines were used to damp the impact of wind gusts in street-level Scout readings and to estimate missing values.
  • The team used OSMnx, a Python library that represents street maps as network graphs, to break the road grid into discrete segments separated by intersections (see the sketch below).
  • Predicted wind values for NOAA weather stations were compared to smoothed road-level data for each road segment for which the team had data.
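
The sketch below illustrates two of these steps in Python: pulling a street network whose edges are intersection-to-intersection segments via OSMnx, and fitting a smoothing spline to a noisy wind series. The place name and the synthetic wind readings are illustrative assumptions, not the team's actual inputs.

```python
# A minimal sketch of the segmentation and smoothing steps; the place name
# and the synthetic wind series are assumptions for illustration only.
import numpy as np
import osmnx as ox
from scipy.interpolate import UnivariateSpline

# Break the road grid into discrete segments separated by intersections:
# each edge of the OSMnx graph is one such segment.
G = ox.graph_from_place("Boston, Massachusetts, USA", network_type="drive")
nodes, edges = ox.graph_to_gdfs(G)
print(f"{len(edges)} road segments")

# Smooth a noisy street-level wind series to damp the effect of gusts.
seconds = np.arange(600.0)                      # ten minutes of per-second readings
wind = 5 + np.sin(seconds / 60) + np.random.normal(0, 1.5, seconds.size)
spline = UnivariateSpline(seconds, wind, s=2.0 * len(wind))  # s controls smoothness
smoothed = spline(seconds)                      # also yields estimates at missing times
```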

Title: Optimizing Financial Institutions’ Reactions to Changes in Federal Interest Rates

Background: Financial institutions earn money on the margin between the Federal Reserve's rate and their respective consumer deposit and business loan rates. It is difficult to know how much to adjust these rates when the Federal Reserve's rate fluctuates. After an unprecedentedly low-rate environment that lasted several years, deposit rates are starting to increase as we move further into a rising-rate environment.

Research Question: Should the Federal Reserve continue to raise rates, what will the impact be on financial institutions’ interest-bearing non-maturity deposits? In different markets, how does a bank’s reaction to the federal funds rate affect its ability to maintain and grow deposits?

Scope: The analysis used publicly available FDIC call report data and included financial institutions that were active between 2000 and 2018 and had total assets under $20 billion.

Methods: To address this question, the team used a number of techniques, including exploratory data analysis, feature engineering, outlier analysis, cluster analysis, autoregressive integrated moving average (ARIMA) forecasting, and deep learning models. The deliverable will be a well-documented ensemble system for rate optimization, implemented in R and delivered as a Power BI dashboard.
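
As a small illustration of the ARIMA component, the sketch below fits a model to a toy deposit-rate series in Python (the team's deliverable itself was built in R); the series values and the (p, d, q) order are invented for the example.

```python
# A minimal ARIMA sketch; the quarterly deposit-rate series and the
# (p, d, q) order are illustrative assumptions, not project results.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rates = pd.Series(
    [0.45, 0.47, 0.52, 0.60, 0.71, 0.85, 1.02, 1.20],
    index=pd.period_range("2017Q1", periods=8, freq="Q"),
)

fit = ARIMA(rates, order=(1, 1, 0)).fit()   # AR(1) on first differences
print(fit.forecast(steps=4))                # next four quarters of deposit rates
```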

Team:

Jessica Hammond, Monit Guin, James Blauvelt, Shatrughan Sharma

Title: Predicting whether banks will fall below "well capitalized"

Scope / Research Question:

Darling Consulting Group would like to identify banks at risk of falling below “well capitalized,” which occurs when a bank’s leverage ratio falls below 5% or its total capital ratio falls below 10%. Banks that fall below these marks are subject to certain regulatory restrictions, so Darling would benefit from advance warning of banks in danger of crossing these thresholds. The project’s goal is to project each bank’s future leverage ratio and total capital ratio based on publicly available FDIC data.

Methods:

To address this question, the team used ARIMA time-series models and LSTM models, along with traditional machine learning models.

Project Scope: Identifying midrange server storage capacity is often difficult, forcing data professionals to scramble to address critical business interruptions caused by insufficient disk storage. The challenge lies in accurately identifying when systems will run out of storage space, which can have significant consequences for Liberty Mutual Insurance Company, slowing data delivery or bringing business operations to a halt.

Methods: In addressing the challenge, the project team applied data cleaning techniques to identify monthly usage for midrange storage volumes across servers.

Findings: Intelligent reporting will be implemented via integrated descriptive dashboards, enabling data professionals to proactively manage compute resources.

Team: Eric Dorata, Anna Kot, Ben Forleo, Mark McComisky 

Title: Eliminating Manual Review from a Smoking Alert System

Background: FreshAir Sensor is a company that helps its clients maintain a clean environment by detecting cigarette and marijuana smoke with its proprietary chemical sensors. Its market is providing detection services to hotels and other establishments.

Research Question: FreshAir Sensor currently employs a process that requires manual review of every case of smoking particulates suspected by its sensor technology. This process requires an employee to be on call at all times. At present, FreshAir devices collect data from five internal, high-frequency sensors. Once an anomaly is detected, the event data is sent to a manual reviewer, who decides whether the alert was caused by smoking. FreshAir Sensor receives about 7,000 sensor alerts each month, and each alert takes about two minutes to review manually. This project examines detection algorithms to reduce the number of alerts manually reviewed by employees, implementing a machine learning system that better discriminates between smoking events and other anomalies.

Methods: To reduce the number of alerts that need to be manually reviewed, the team will provide an ensemble of predictive models. The ensemble will identify alerts that do not need to be reviewed, namely those classified with a high degree of confidence as smoking or not smoking. Algorithms contributing to this ensemble include a support vector machine (SVM), a random forest, an extreme gradient boosting (XGBoost) classifier, and an artificial neural network (ANN).
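
A minimal sketch of this confidence-gated triage idea follows, with a logistic regression standing in for the XGBoost and ANN members to keep the example dependency-light; the synthetic data and the probability thresholds are assumptions, not FreshAir's production settings.

```python
# A sketch of confidence-gated alert triage; the data, ensemble members,
# and thresholds are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average the members' predicted probabilities
).fit(X, y)

proba = ensemble.predict_proba(X)[:, 1]  # P(smoking) per alert
# Auto-resolve only high-confidence alerts; everything else goes to a reviewer.
needs_review = (proba >= 0.05) & (proba <= 0.95)
print(f"{needs_review.mean():.0%} of alerts still require manual review")
```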

Team: Ben Weckerle, Manoj Virigineni, Jen Legere, Alicia Hernandez

Title: Sensor Anomaly Classification with Deep Learning 

Research Question:  Can the number of manually reviewed events be reduced while simultaneously letting a minimal number of smoking events go undetected?  

Methods: To reduce the number of alerts sent to manual review while maintaining a high level of accuracy, both feature-based and deep learning approaches were explored. A convolutional neural network produced the best results and will allow FreshAir to more accurately triage events sent for manual review.
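
For readers curious what such a classifier can look like, here is a minimal Keras sketch of a one-dimensional convolutional network over multichannel sensor windows; the input shape, layer sizes, and random training data are assumptions, not FreshAir's actual architecture.

```python
# A minimal 1D-CNN sketch for sensor-window classification; shapes, layers,
# and the random training data are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(120, 5)),               # 120 time steps x 5 sensor channels
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),      # P(smoking event)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.randn(64, 120, 5).astype("float32")   # synthetic stand-in windows
y = np.random.randint(0, 2, size=64)
model.fit(X, y, epochs=2, batch_size=16)
```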

Team: James Blauvelt, Brennan Donnell, Sam Isenberg, Joanna Grory

Title: Predicting High-Cost User Groups and Identifying Beers List Prescriptions

Background: Martin’s Point cares for a large population of Medicare and elderly patients in its practices and through its coverage plans.

Research Question: The task is to identify any individuals currently receiving care from Martin’s Point who may have been prescribed medications flagged by the current American Geriatrics Society Beers Criteria as inappropriate for older adults.

Methods: Utilizing Martin's Point Generations Advantage medical and pharmacy claims data as well as the current version of the American Geriatrics Society Beers Criteria, the team utilized a number of techniques including data mining, feature engineering, outlier analysis, descriptive analytics, and data visualization.
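
At its core, the flagging step can be thought of as a join between claims and the Beers medication list, as in the sketch below; all member IDs, column names, and drug names are hypothetical placeholders, not Martin's Point's actual schema.

```python
# A minimal sketch of Beers Criteria flagging via a pandas join; member IDs,
# columns, and drug names are hypothetical placeholders.
import pandas as pd

claims = pd.DataFrame({
    "member_id": [101, 102, 103],
    "drug_name": ["diphenhydramine", "lisinopril", "diazepam"],
})
beers_list = pd.DataFrame({
    "drug_name": ["diphenhydramine", "diazepam"],
    "beers_category": ["anticholinergic", "benzodiazepine"],
})

# Inner join keeps only claims whose drug appears on the Beers list.
flagged = claims.merge(beers_list, on="drug_name", how="inner")
print(flagged)
```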

Team: Phoebe Robinson, Jeremy Dickinson, Bayleigh Logan

Title: Understanding High-Cost Users

Research Question: Martin's Point faces a highly skewed distribution of cost, with 9% of the population accounting for more than 50% of the total cost. The purpose of this study is to understand the main clinical groupings of individuals covered by Martin’s Point Generations Advantage insurance who incur high costs. The task was to segment members based on total cost, identify the movement of members between cost segments over a three-year period, and provide additional insight into the characteristics of high-cost users.

Methods: The team used Martin's Point Generations Advantage medical and pharmacy claims data. Members were required to have at least three years of continuous enrollment during the sample period (2014-2018). To address this question, the team applied a number of techniques including data mining, feature engineering, outlier analysis, clustering analysis, stochastic modeling, and data visualization.
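
A toy version of the segmentation-and-movement analysis might look like the sketch below; the members, cost figures, and quartile-based segments are invented for illustration.

```python
# A minimal sketch of cost segmentation and year-over-year movement;
# the members, costs, and quartile segments are invented examples.
import pandas as pd

costs = pd.DataFrame({
    "member_id": [1, 2, 3, 4, 1, 2, 3, 4],
    "year":      [2016, 2016, 2016, 2016, 2017, 2017, 2017, 2017],
    "total_cost": [500, 4000, 20000, 90000, 700, 25000, 18000, 3000],
})

# Quartile-based cost segments, then a movement matrix between years.
costs["segment"] = pd.qcut(costs["total_cost"], 4,
                           labels=["low", "mid", "high", "top"]).astype(str)
wide = costs.pivot(index="member_id", columns="year", values="segment")
print(pd.crosstab(wide[2016], wide[2017]))
```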

Team: Jessica Hammond, Monit Guin, James Blauvelt, Shatrughan Sharma

Title: Identifying factors that drive successful referrals for managed care.

Scope / Research Question: Martin’s Point has a predictive modeling tool that uses patient data to refer patients for managed care, but it greatly underperforms compared to caseworker referrals. This low success rate diverts resources away from contacting and enrolling the patients that will be eligible for managed care, leading to unnecessary costs and potentially to negative impacts on member health outcomes.

Methods: Using de-identified member data, we addressed the question of why members are or are not meeting the criteria for referral, and used this information to address the low referral success rate of the data-driven referral model. The specific methods include random forest models, support-vector machine models, and LSTM neural networks.

Team: Jared Fortier, Jen Legere, Amy Chang, Neha Narla

Title:  Influencing litigation strategies in the insurance industry 

Research Question: Riverstone has a large number of reinsurance claims that have the potential to be brought to court. Our task was to identify additional data points that could aid in developing predictive models to better direct claims mitigation resources.

Scope: The analysis utilizes court case APIs covering asbestos claims across a number of states.

Methods: We utilized a number of techniques including data mining, JSON parsing, feature engineering, tagging, regular expressions, machine learning, and data visualization.

Team 1: Matt Heckman, Monit Guin, Dushyant Kumar, Sam Karkach

Team 2: Phoebe Robinson, Nick Zylak, Jared Fortier, John Gagno 

Title: Predicting end target for incoming emails

Scope: We have been tasked with accurately predicting the end target (department) of incoming emails, given a set of information about each email, including the email itself: the text body, the subject line, the final target destination, the product(s) a client owns, and so on.

Methods: Team 1 used TF-IDF vectorization and XGBoost modeling to predict the target of each email. Team 2 used a range of text analytics techniques, including word clouds and text2vec, then applied a TF-IDF vectorizer and several machine learning algorithms to predict the target inbox, including logistic regression, random forest, and linear support vector classification.
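
The core of both teams' approach, vectorizing the text with TF-IDF and feeding it to a classifier, can be sketched as follows; the example emails and department labels are invented placeholders.

```python
# A minimal TF-IDF + classifier sketch for email routing; the emails and
# department labels are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "please update the beneficiary on my policy",
    "question about my disability claim payment",
    "reset my online account password",
]
departments = ["policy_services", "claims", "it_support"]

router = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),     # unigrams and bigrams
    LogisticRegression(max_iter=1000),
)
router.fit(emails, departments)
print(router.predict(["I never received my claim check"]))  # likely 'claims'
```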

Team 1: Chad Lyons, Dan Konig, Viraj Salvi, Dushyant Kumar 

Team 2: Bayleigh Logan, Sam Karkach, Frawley Barton, Mitchel Friend

Title: Predictive Analytics: A targeted approach to student retention and success

Background: Affecting university rankings, school reputation, and financial well-being, student retention has become one of the most important measures of success for institutions of higher education, with freshman attrition holding steady at 30% at Plymouth State University.

As students have increasing options for educational and career opportunities, Plymouth State University engaged the University of New Hampshire to understand the causes behind freshman attrition, how to accurately predict at-risk students, and appropriately intervene to retain them.

Methodology: Using six years of institutional data along with relevant data mining techniques, the project team developed analytical models aimed at predicting freshman attrition, including attrition likely to result from low academic performance. Models included regression, random forests, support vector machines, and gradient boosting. Variable importance analysis was conducted to identify which predictors most strongly affect freshman attrition.
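
The variable-importance step can be illustrated with a short scikit-learn sketch; the data is synthetic and the feature names are hypothetical stand-ins, not PSU's actual predictors.

```python
# A minimal variable-importance sketch; the data is synthetic and the
# feature names are hypothetical stand-ins for institutional predictors.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=600, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=["hs_gpa", "first_sem_gpa", "credits_attempted",
                             "unmet_need", "distance_from_home"])

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
importance = pd.Series(rf.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False))  # strongest attrition predictors first
```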

As a result of the analysis, incoming first-year students can be placed into one of four targeted cohorts based on their predicted likelihood of leaving Plymouth State University, enabling University personnel to examine risk and recommend proactive advising approaches.

Team 1: Devan Miller, Anna Kot, Ben Weckerle, Brennan Donnell

Scope / Research Question: To identify, based on several factors, whether a student will graduate; that is, to predict whether a student will finish their studies at Plymouth State.

Methods: Data mining and controlling for imbalanced classes. A variety of analytical tools were employed, including support vector machines, random forests, logistic regression, and XGBoost models.
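
One common way to control for imbalanced classes is class weighting, as in this brief sketch; the synthetic 90/10 label split is an illustrative assumption.

```python
# A minimal class-imbalance sketch using class weighting; the synthetic
# 90/10 label split is an illustrative assumption.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# class_weight="balanced" penalizes errors on the rare class more heavily.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
print(clf.score(X, y))
```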

Team: Frawley Barton, Maz Hejazidahaghani, Amy Chang, Jiale Zhao

Title: Assessing the impact of World Bank interventions on the health of the Caribbean Large Marine Ecosystem and the prosperity of human populations that depend on it.

Scope / Research Question: In 2002, the World Bank funded a Caribbean-wide project to address three major areas of concern in the Caribbean Large Marine Ecosystem (CLME): unsustainable exploitation of fish and other living resources, habitat degradation and modification, and pollution. The project was evaluated upon completion to assess its success in meeting the primary goal: to help Caribbean countries improve the management of their shared marine resources through an ecosystem-based management approach. However, the Global Environment Facility lacks information about the project’s broader impact on the health of the CLME and the socio-economic situation of the human populations that depend on it. This project addresses that information gap by analyzing pre- and post-intervention trends in multiple metrics of ocean health and human welfare.

Partner: Independent Evaluation Office of the Global Environment Facility of the World Bank – Blue Economy Projects

Team: Joanna Grory, Jon Bieniek, Heather Frechette, John Cagno

Methods: Web scraping for information about marine-related laws passed by 32 Caribbean countries, clustering analyses, and regression analyses.

Project Scope: Reviewing and synthesizing government reports is often difficult, forcing evaluation officers to allocate extensive time and resources to summarizing key themes, due to lengthy reports and the lack of standardized reporting. The challenge lies in accurately identifying which sections of a report contain relevant findings, which matters for the Global Environment Facility because manual review requires significant labor hours and is subject to human error.

Methods: In addressing the challenge, the project team applied text mining techniques to identify relevant sections of the reports and developed a Shiny application as a proof of concept for a multi-document summarization tool.

Team: Alicia Hernandez, Devan Miller, Eric Dorata, Shatrughan Sharma 

2018 Practicum Projects

Inpatient falls at Elliot impose a heavy penalty on the hospital and, even more importantly, are major catastrophic events for patients. Elliot tracks these falls and uses the Johns Hopkins Fall Risk Assessment Tool (JHFRAT) to categorize each patient’s fall risk level. Our aim is to assess the performance of the JHFRAT at Elliot and elicit further patient information to be used in conjunction with machine learning methods to improve the tool’s accuracy.

Team members: Nisha Muthukumaran, Meseret Tekle, Daniel Walsh, Steven Glover, Julia Vaillancourt, Brandon Epperson

Team members: Patrick Kispert, Jacob Daniels, Christine Hanson, Sarah Brewer, Caroline Lavoie, Nemshan Alharthi, Serina Brenner

The largest provider of group disability insurance in the nation is looking for improved ways to model risk more accurately at the client level. By leveraging third-party data, patterns can be identified in historically good and bad risks to develop a model that better predicts future risk performance.

Team members: Philip Bean, Joy Lin, Brandon Bryant, Sarah Brewer, Gowri Neeli, Olufisayo Dada

Martin's Point Health Care (MPHC) provides healthcare services as both an insurance carrier and a medical provider. MPHC is interested in increasing its “Overlap” population: members enrolled in one of Martin’s Point insurance plans who also receive medical care from Martin’s Point. The primary question addressed was: How can MPHC increase its overlap population to provide a more comprehensive healthcare experience?

To examine this question, the team applied clustering techniques and artificial neural network (ANN) models, and developed an interactive dashboard of the populations of interest.

Team members: Kim Lowell, Caroline Lavoie, Gowri Neeli, Thomas Cook, Suzannah Hicks, Michael Gryncewicz

Institutional Research (IR)

Like many other universities, the University of New Hampshire is concerned with increasing enrollment yield, the percentage of admitted students who choose to attend the University. To examine factors related to student yield, the team used a number of analytic techniques. Cluster analysis was used to segment students into different profiles across the dataset. For supervised learning, we used logistic regression, random forest models, and artificial neural networks to predict whether an admitted student enrolls at UNH.

CaPS

Research shows that the pursuit of a better job remains the number one reason why freshmen choose to attend college. Universities must be increasingly aware of their responsibility to help students attain this goal. A cornerstone of this strategy at UNH is the Career and Professional Success office, which is committed to empowering UNH students to attain the knowledge and skills needed to succeed in their professional lives. While UNH alumni continue to have full-time employment and workplace engagement rates that are higher than the national average, the question remains: Are there things that UNH can do to increase student job-readiness and personal success? This project presents two solutions: 1) the use of predictive modeling techniques in Python and storytelling in Tableau to identify the relationships between student activity and post-graduation outcomes, and 2) the simulation of a non-siloed database containing student data with the power to identify opportunities for intervention and support that can impact student success over time. 

Team members: Connor Reed, Nemshan Alharthi, Olufisayo Dada, Jacob Daniels, Christine Hanson, Amanda Fakhoury

Avant Course machine learning. How can hyper-local data collection be refined for new product design? Avant Course set a goal to refine road-bump detection methods and implement road classifications in order to identify optimal driving routes for electric and autonomous vehicles. With road classifications in place, electric vehicles can extend their range by avoiding extremely bumpy or unkempt roads. To achieve this goal, multiple functions were created. Geospatial Python packages were used to calculate distance driven, map road IDs, and visualize drives and detected bumps. iPhone gyroscope data was used to detect when a phone is being moved by a user rather than registering a bump in the road; however, if a user moves the phone without turning it along any axis, the movement is attributed to the car rather than the user. A neural net was run to predict where a bump was detected due to user movement; as expected, the model predicted no user movement because the response variable was so sparse. To reduce the data users transmit and the amount of data stored, a function was created to limit the number of readings per second (sketched below); limiting readings both reduces the amount of data stored and speeds up processing when the data is used. This information is then used to detect and categorize bumps, and as a result the roads are classified into categories ranging from safe to dangerous.
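
The reading-rate limiter mentioned above might look like the following sketch; the (timestamp, value) input format and the cap of five readings per second are assumptions, not Avant Course's actual implementation.

```python
# A minimal sketch of a reading-rate limiter; the input format and the
# per-second cap are illustrative assumptions.
def limit_readings_per_second(readings, max_per_second=5):
    """Keep at most max_per_second readings in each whole-second bucket."""
    kept, counts = [], {}
    for timestamp, value in readings:
        second = int(timestamp)
        if counts.get(second, 0) < max_per_second:
            counts[second] = counts.get(second, 0) + 1
            kept.append((timestamp, value))
    return kept

sample = [(0.10, 1.2), (0.20, 1.3), (0.25, 1.1), (0.40, 0.9),
          (0.50, 1.0), (0.60, 1.4), (1.10, 0.8)]
print(limit_readings_per_second(sample))  # drops the sixth reading in second 0
```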

Team members: Jolanta Grodzka, Serina Brenner, Michael Gryncewicz, Amanda Fakhoury, Meseret Tekle, Daniel Walsh

Arkatechture is a data analytics company that has identified financial institutions (FIs) as a key client group. Arkatechture wishes to increase its client base by using publicly available statutory reporting information. Moreover, Arkatechture has to date focused on credit unions (CUs) and would like to expand its bank clientele.

This project is primarily one of data architecture and manipulation as opposed to data analysis. Our task is to facilitate the use of FI data rather than analyze the data to find, for example, under-performing FIs. The work consisted of five tasks: (1) development of the overarching data flow, (2) database development, (3) conformance and KPIs, (4) data cleaning, and (5) data visualization.

At each step different tools and techniques were employed.  Amazon AWS and Redshift were utilized for the database component and for KPI construction.  Other tools in Python, R, and Tableau were utilized throughout.

Team members: Michael Shanahan, Brandon Bryant, Katharine Cunningham, Kim Lowell, Jolanta Grodzka

This project explores the NH State Police crash dataset of all commercial vehicle crashes to better understand the causes of commercial vehicle crashes and the role of distracted driving. The National Institute for Occupational Safety and Health (NIOSH) has focused attention on motor vehicle crashes as the number one cause of work-related injury in the U.S., and currently provides evidence that distracted driving is a major cause of commercial vehicle crashes. To explore the characteristics of at-fault drivers, we applied an unsupervised technique, specifically clustering, to describe driver behavior. We used a random forest model to determine the most likely causes of accidents and the likelihood of injury given an accident.

Team members: Brandon Epperson, Suzannah Hicks, Michael Shanahan, Patrick Kispert, Joy Lin

Darling Consulting Group (DCG) focuses on asset liability management (ALM) services in an attempt to mitigate risk while ensuring financial stability for their clients. This project sought to create a model to predict commercial loan prepayments for small to mid-size banks. Anticipating loan prepayments allows a bank to better plan for future cash flows, structure their balance sheet, and prepare for regulatory oversight. The practicum teams explored numerous methods to predict loan prepayment using a data set including a time series of loan payments from six anonymous regional banks. The predictive methods include artificial neural networks (ANN), recurrent neural networks (RNN), time-series analysis (ARIMA, Exponential Smoothing), and random forest. Additionally, the team explored descriptive details of the loans and the external economic factors impacting prepayment behavior. 

Team Members: Katharine Cunningham, Steven Glover, Julia Vaillancourt, Thomas Cook, Nisha Muthukumaran, Connor Reed, Philip Bean

2017 Practicum Projects

Team members: Joan Loor, John Kelley, Robin Marra, Yitayew Workineh

Description: Granite State College (GSC) was established in 1972 by the Trustees of the University System of New Hampshire as the School for Continuing Studies. The Mission of Granite State College is “to expand access to public higher education to adults of all ages throughout the state of New Hampshire.” To fulfill that goal GSC has five full-service Regional Campuses and three Academic Sites.

This project focused on the identification of factors that affect the prospects for academic success at Granite State College. To that end, GSC provided us with de-identified student data regarding demographics, enrollment patterns, academic performance, and financial aid status. In addition, we created many new features that were important to our analyses. Employing random forests and clustering we identified five distinct groups of GSC students. Subsequent survival analysis enabled us to isolate several factors that either decreased or increased the propensity of students to drop out. Based upon our findings we recommend several warning indicators that GSC can utilize to enhance student retention.

Team members: Jamie Fralick, John MacLeod, Erica Plante, Swapna S

Description: Martin’s Point Health Care’s mission is to provide better care at lower costs in the communities they serve. The team was tasked with developing a new method to predict groups of members who might be at increased risk of experiencing a major medical event, enabling Martin’s Point to proactively reach out to those patients with preventative care offers in hopes of sparing them a future medical event.

Using de-personalized medical and pharmacy claims data provided by Martin’s Point, the team performed an observational study. Cluster analysis techniques were used to segment members into several groups with similar claim profiles. The team then used survival analysis over the observed timeframe of the data to assess the likelihood that each cluster would incur a major claim during that timeframe. An interactive dashboard was created to allow Martin’s Point to drill down into the clusters at relatively higher risk of a major claim and reach out to those members with preventative care.
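
A compact illustration of cluster-wise survival analysis, using the lifelines library, is below; the follow-up times, event flags, and cluster labels are invented placeholders, not Martin's Point data.

```python
# A minimal Kaplan-Meier sketch per cluster; durations, events, and
# cluster labels are invented placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter

members = pd.DataFrame({
    "months_observed": [12, 24, 18, 30, 6, 27],
    "major_claim":     [1, 0, 1, 0, 1, 0],   # 0 = censored, no major claim seen
    "cluster":         ["A", "A", "B", "B", "A", "B"],
})

kmf = KaplanMeierFitter()
for name, grp in members.groupby("cluster"):
    kmf.fit(grp["months_observed"], grp["major_claim"], label=f"cluster {name}")
    # Survival probability = P(no major claim yet) over the follow-up window.
    print(kmf.survival_function_.tail(1))
```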

Team Members: Adetoun Adeyinka, Hailey Bodwell, Richa Kapri, and Mengying Xu

Description: CA Technologies is an international, publicly held corporation that ranks as one of the largest independent software companies in the world. CA Technologies creates systems software that runs in distributed computing, mainframe, virtual machine, and cloud computing environments. This practicum project focused on developing an analytical model to more efficiently map CA’s sales teams to potential customers. The data used to complete this goal was sales-related, drawn from sources such as current and historical contract information, client data, and usage activity. A variety of clustering techniques were used to statistically group clients based on common characteristics. Within each cluster, optimal buyers of product groups were identified. These optimal buyers were used to develop propensity scores that sales teams can now use to target products to the clients most likely to purchase them.

Team Members: Logan Mortenson, Shane Piesik, Soumya Shetty

Darling Consulting Group (DCG) is one of the largest asset liability management firms in the United States, helping banks and credit unions manage their balance sheets effectively.

Description: This project sought to distinguish core bank customers from rate shoppers for specific bank clients. DCG gave us a 16 GB data folder containing separate files for each month from 2004 through August 2016. Through feature engineering and data transformation, the team was able to manipulate the data to find similarities across the bank's customer accounts. The solution to DCG’s dilemma was two-fold: the team created a support vector machine algorithm to predict customer dropout from the bank, then used survival analysis to determine a safe prediction window for when a customer is likely to leave. This information can be used to alert the bank as to when it should reach out to important customers who may be on the verge of leaving.

Team members: Colin Cambo, Pujan Malavia, Austin Smith, Benjamin Tasker

Description: UNUM is a Fortune 500 provider of insurance protection for 33 million people worldwide. The primary goal of this project was to optimize the workflow for the thousands of emails UNUM receives per day. A secondary task was to experiment with sentiment analysis to see whether sender mood could be gauged in real time. The data set provided contains 2 million emails from two different databases.

The team used Python for all coding and used regular expressions to clean the emails, removing subject lines, confidentiality statements, and similar boilerplate (see the sketch below). This methodology helped the UNUM group classify the emails into several groups at 70% accuracy. If UNUM were to implement the email model provided, savings for the customer service department alone are estimated at approximately $3 million.
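
Regex-based cleaning of that kind can be sketched briefly; the patterns below are illustrative examples, not the team's actual production rules.

```python
# A minimal regex email-cleaning sketch; the patterns are illustrative
# examples, not the team's production rules.
import re

def clean_email(text):
    text = re.sub(r"(?im)^subject:.*$", "", text)               # drop the subject line
    text = re.sub(r"(?is)confidentiality notice.*$", "", text)  # drop trailing legal boilerplate
    return re.sub(r"\s+", " ", text).strip()                    # collapse leftover whitespace

raw = """Subject: Claim question
Hello, I have a question about my claim status.
CONFIDENTIALITY NOTICE: This email is intended only for the named recipient."""
print(clean_email(raw))  # -> "Hello, I have a question about my claim status."
```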

Team Members: Bethany Bucciarelli, Shannon Snively, Minh Ly, Phi Nguyen

Description: This project examined a multitude of factors related to driving risk, including weather, road surface conditions, and auto accidents en route. The goal is to establish a measure of a person's likelihood of encountering risk along a given route. In addition, the team was asked to validate a weather severity score previously developed by winningAlgorithms. To study auto accidents, data from the US Fatality Analysis Reporting System (FARS) was used to analyze the prevalence, cause, and risk associated with fatal car accidents. Attributes included the number of people involved, the number of deaths, weather, time of day, and time of year, combined with additional factors to determine an overall probability of encountering an accident along a route as the road's severity score. Additionally, weather data collected by wA's avantCourse was used in conjunction with FARS. Findings indicate that during certain weather conditions, such as clear summer days, accident prevalence is greater than at any other time or in any other weather condition. Additional attributes and measures are being tested to develop the most reliable model.

Team Members: Kevin Rossi, Zachary Porcelli, Arber Isufaj, Suofeiya Yin

Description: This project provides Elliot Hospital with financial insight into its Case Mix Index (CMI). Elliot Hospital is looking for insight into the causes of fluctuations in its CMI. The M.S. student team sought to develop a solution for better analyzing past patients in order to budget correctly for the future. With data provided by Elliot Hospital, the team developed a dashboard that allows Elliot to view characteristics such as CMI, length of stay, paid amount, and month by selecting three features at a time. When certain features are selected, such as department, payer, and diagnosis-related group, the dashboard produces a calculated CMI based on the selected inputs. This gives Elliot the ability to drill down and pinpoint the areas of the health system showing fluctuation in the CMI. With this dashboard, Elliot Hospital will be able to better prepare for future patients' needs.

2016 Practicum Projects

The specific goal was to predict the duration of short-term disability claims at intake in order to optimize claim assignment and resolve claim costs. The team also investigated introducing new technologies and methodologies to the previously used traditional analytic approach, with the potential to substantially decrease the analytics QA and deployment timeframe.

Team members: Kofi Ebakyea, Alex Booth, Alissa Andrews

This project proposed to derive a method for identifying associations between mental disorders and physical comorbidities using patient segmentation models based on patient demographics, metrics of resource utilization, and historical claims data. An interactive geospatial dashboard was also developed to optimize the location of new care centers.

Team members: Justin Greenberg, Jon Vignaly, Pritti Joseph

This project examined the medical nature of the opioid epidemic in New Hampshire, using the NH Comprehensive Health Care Information System (CHIS) to look at county-based rates and trends in prescribing (opiates, treatment, and blockers), drug-related mortality, diagnoses, and substance use disorders (SUDs) within the state. Outcomes provided county-based analysis of prescribing and opioid use as well as treatment. In addition, the outcomes exposed the value and limitations of public-use data in performing this type of inquiry and suggested policy changes and further research for improved future analysis.

Team members: Adrienne Martinez, Carol Page

The UNH project defined factors related to student success at UNH. There were three primary objectives: 1) to create segments of various kinds of undergraduate students; 2) to quantify predictors of success among those segments; and 3) to quantify psychographic predictors of success. Data included five years of student academic histories, the first-destination student survey, and psychometric survey data collected from UNH students.

Team 1 members: Derek Naminda, Yuyu Zhou, Alyssa Cowan
Team 2 members: Rachel Cardarelli, Kevin Stevens, Chris Dunleavy

Learn. Apply. Lead.