
Five Questions to Consider to Shift from Data Collection to Outcomes Measurement



In an age where information is abundant and storage is cheap, it's tempting to equate volume with value. This is especially true in academic institutions, where the pursuit of knowledge is relentless and often equated with progress. But there's a stark difference between collecting data for the sake of it and gathering insights to measure meaningful outcomes.

Consider the library. If a university were to simply count the number of books it acquires each year, it would have a lovely data point. But does that tell us if students are actually reading these books? Or if the contents of these books are enhancing their educational journey?

At its worst, collecting data just for the sake of it can create noise. It distracts from the critical indicators that truly matter and can lead to misallocated resources, diluted focus, and a false sense of achievement.

But here's where the magic happens: when universities pivot from passive collection to active measurement. When they begin to ask, "What outcomes are we aiming for? And how can this data guide us there?" Suddenly, the sea of numbers crystallizes into a roadmap. It becomes a tool for continuous improvement, pointing to successes to be replicated and challenges to be addressed.

Data collection without thorough reflection produces outputs, much like what a calculator gives you when you type 2 + 2 =: cold, dry, and ineffective at inspiring the change that fosters improvement. Data collection grounded in a clear understanding of your intended outcomes and data choices, however, can help you craft and share a narrative with your team and administration that builds support for your cause.

So, as we delve into thinking about how to move from outputs to outcomes, remember: Data, in its purest form, is just potential. It's the interpretation, context, and purpose we assign to it that transforms it into a catalyst for change. When you are thinking about the cost-benefit of collecting that next data point, be sure to think about the following:


 

1. What is your intended outcome? What are you trying to accomplish?

 

Consider a scaffolded approach to your response. The upper level reflects the ultimate intended outcomes, i.e., the outcomes one might present to upper administration. The lower level reflects the work done to achieve those upper-level outcomes. This is the data that helps you craft the narrative to support your case and provides a road map for growth and improvement on the upper-level outcomes. In other words, upper-level outcomes are side effects of lower-level outcomes.
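To make the scaffold concrete, here is a minimal sketch in Python of how the two levels relate. Every outcome and measure named below is a hypothetical placeholder, not a prescribed model:

```python
# A minimal sketch of a scaffolded outcome map. All outcome names and
# measures are hypothetical placeholders for illustration only.
outcome_scaffold = {
    "upper_level": {
        "outcome": "More students land internships and first-year jobs",
        "audience": "upper administration",
        "measures": ["internship placement rate", "first-destination survey"],
    },
    "lower_level": [
        {
            "outcome": "Students demonstrate professional written communication",
            "measures": ["resume rubric scores", "cover letter rubric scores"],
        },
        {
            "outcome": "Students can articulate their skills in interviews",
            "measures": ["mock interview rubric scores"],
        },
    ],
}

# Upper-level outcomes are side effects of lower-level work, so the
# narrative rolls up from the bottom.
for item in outcome_scaffold["lower_level"]:
    print(item["outcome"], "->", outcome_scaffold["upper_level"]["outcome"])
```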

 

For example, career services programming is often developed to improve student skill in each of the career competencies described by NACE. Yet many career services departments collect data only on the number of students receiving internships and landing jobs in their first year post-graduation. Those are great upper-level data points, but they are mere side effects of departmental efforts to improve student career competencies, and they tell the department nothing about what is working and what is not.

 

To put it another way, if the number of students receiving internships is low or trending downward (assuming industry conditions have not changed), that number gives no indication of why it is low or where to focus to improve it. The obvious solutions may be to a) improve existing programming; b) create more programming; or c) recruit more industry partners. On what can we base that decision? If we choose to improve existing programming, how do we know which programs are less effective and require improvement? If we choose to do more programming, what type is needed? Awareness of services? Professional writing? Public speaking? If we choose to recruit more industry partners to increase the number of students landing internships, where are the industry gaps that may be contributing to the problem?

 

The numbers that told us there was a problem cannot answer any of these questions and are thereby rendered useless in finding a solution. By identifying the lower-level intended outcomes through student learning, i.e., assessing student learning of the career competencies, the measurement instead becomes a series of guideposts for continuous improvement, with the beneficial side effect of increasing upper-level outcomes such as the number of students landing an internship or job.
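As a sketch of what those guideposts could look like in practice, imagine rubric scores on a handful of competencies collected at the end of each program. All program names, competencies, and scores below are invented for illustration:

```python
import pandas as pd

# Hypothetical rubric scores (1-5) collected at the end of each program.
# Programs, competencies, and scores are made up for illustration.
scores = pd.DataFrame({
    "program":    ["Resume Lab", "Resume Lab", "Mock Interviews",
                   "Mock Interviews", "Networking Night", "Networking Night"],
    "competency": ["communication", "professionalism", "communication",
                   "critical thinking", "professionalism", "communication"],
    "score":      [4.2, 4.5, 3.1, 3.4, 2.8, 2.6],
})

# Average score per program per competency: low cells point to the
# specific programming that needs attention, which placement counts
# alone cannot do.
guideposts = scores.pivot_table(index="program", columns="competency",
                                values="score", aggfunc="mean")
print(guideposts.round(2))
```

In this made-up table, Networking Night's low communication scores would suggest where to intervene long before placement numbers ever moved.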

 

It comes down to having a deep, clear understanding of what the goals are and how they support each other. Whether you are trying to obtain funding, improve program quality, or fill a knowledge gap, strategize from the top down and identify the intended outcomes as they pertain to student learning. The answers to these questions can help you decide what kind of information to collect.

 


 

2. Why are you collecting this data point/information?

 

Sometimes we collect data because it is easily available, not because it is the best measure of our intended outcome. Maybe we are using the (increased) count of counseling center visits to conclude that there is a mental health crisis on campus and we need to hire more counselors. That data is readily available and arguably suggestive of increased mental health care needs. It may also be suggestive of an increase in help-seeking behaviors, a decrease in mental health care stigma, better marketing of services, or word getting around that the newest counselor is incredibly skilled and good-looking. Based on that one data point, we cannot assume that the cause of the increase in counseling center visits is in fact generalized poorer mental health among students.

 

A better measure of mental health on campus might be a climate survey. Surveys like this have many benefits, but they can also be time- and cost-prohibitive. A homegrown survey can serve as a decent substitute. We've seen several surveys circulate that were developed by student groups or academic departments, showing that there is already a keen interest in understanding the mental well-being of the campus community. Such grassroots efforts have the advantage of being tailored to specific campus cultures and needs, potentially capturing more nuanced data.

 

However, the potential downside of homegrown surveys is the lack of standardized measurement tools and potential biases that might influence results. Regardless, the importance of understanding the mental health landscape cannot be overstated. Even if imperfect, these efforts can provide crucial insights, start necessary conversations, and guide interventions. In a time when mental health challenges are on the rise, having a pulse on the well-being of our academic community is essential for fostering a supportive environment.

 

Good quality measurements also need to address the potential disparities between perceived need and true need. Surveys asking for opinions generate responses about perceived needs. This can be valuable information, but before determining an action plan based on these perceptions, one must differentiate between genuine needs and those that are more superficial or temporary in nature. Often, true needs lie hidden beneath layers of personal bias, societal influence, or misinformation.

 

To accurately assess the real situation, it helps to combine qualitative feedback from surveys with quantitative data from other reliable sources. This multifaceted approach ensures that decisions are not based solely on transient opinions but are rooted in comprehensive insights. Furthermore, addressing only perceived needs can lead to solutions that are short-lived and do not tackle core issues. By considering both perceived and actual needs, organizations can develop more sustainable and effective strategies, ensuring resources are allocated wisely and with a long-term perspective in mind.

 


 

3. What does this information tell you? Is it really telling you what you think it is?

 

Collecting data because it is easily available is commonplace and often required. The key is to use it wisely, with a full understanding of what the data is truly telling you versus what you want it to tell you. Proxy measures are indirect measures of the intended outcome. Though a proxy may be closely related to the intended outcome, it leaves plenty of room for misinterpretation.

 

Surveys are proxies because they never get 100% response rates. The only true measure you have is how many students were comfortable enough to answer the survey question; everything else gleaned from the responses is a proxy, an indirect measure. Let's say you get an incredible 50% response rate. The half of the sample that responded may be fundamentally different from the half that did not, leaving you with biased data. In some instances, you can compare the demographics of those who responded with those who did not, which may provide some insight into the differences between the groups. Whether or not that exploration is possible, the differences must be considered when interpreting proxy measures, and, dare I say, acknowledged in the final report.
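To make that respondent-versus-non-respondent comparison concrete, here is a minimal sketch using entirely hypothetical class-year counts; a chi-square test is one common way to check whether the two groups differ:

```python
from scipy.stats import chi2_contingency

# Hypothetical class-year counts for survey respondents vs. non-respondents.
# A significant result suggests the half that answered differs from the
# half that stayed silent, and that caveat belongs in the final report.
counts = [
    # first-year, sophomore, junior, senior
    [220, 180, 150, 150],   # respondents
    [120, 160, 200, 220],   # non-respondents
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Respondents and non-respondents differ; interpret with caution.")
```

A significant difference does not fix the bias; it simply tells you the bias is real and needs acknowledging.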

 

Even good quality surveys return biased data. Often we create our own surveys to understand our intended outcomes; it is a low-cost, specific way to meet our needs. The downside, however, is that the questions we develop inherently come from a biased lens. An unskilled but good-natured practitioner may ask leading questions that elicit the anticipated or hoped-for response.

 

Those biases get amplified with proxy data because you chose the questions that produce the data that says what you want it to say. To prevent this, work with other offices on campus to develop unbiased questions. Let others review your survey and provide feedback. We all have blind spots, and the only way to make them visible is to take a team approach to developing proxy measures.

 


 

4. What does it cost to collect? How much capital are you willing or able to expend to get the data you need?

 

As student affairs assessment professionals, we deeply understand the complexities of collecting meaningful and reliable data for assessment. This process requires substantial time, energy, and resources. It's essential for stakeholders to recognize the significance of data-driven decisions in higher education.

 

Data collection, when meticulous, can span from weeks to years. The pre-assessment stages, such as goal setting, instrument design, and approvals, set the stage for the actual data gathering. Subsequently, data cleaning, analysis, and report generation can further elongate the timeline, all crucial for result reliability.

 

Human effort in gathering quality data is immense, encompassing staff training, department coordination, ethical concerns, and participant engagement. Challenges like low response rates or technological issues can intensify the cognitive strain on professionals and students alike. With survey fatigue prevalent, our approach must be strategic and intentional.

 

Financial aspects are equally pivotal. While direct costs involve software investments or hiring consultants, indirect costs might relate to staff hours redirected from other duties. For instance, if student workers are counting heads in a library, we must evaluate the data's worth against the worker's pay. These costs, however, are investments. Proper assessment can result in long-term savings or fund generation.
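That headcount trade-off reduces to back-of-the-envelope arithmetic. Here is a minimal sketch with made-up figures; swap in your own wage, frequency, and term length:

```python
# Back-of-the-envelope cost of library headcount data. All figures are
# made up for illustration; substitute your institution's numbers.
hourly_wage = 14.00          # student worker pay, USD/hour
counts_per_day = 4           # headcounts taken each day
minutes_per_count = 15       # time to walk the floors once
days_per_semester = 110

hours = counts_per_day * minutes_per_count / 60 * days_per_semester
cost = hours * hourly_wage
print(f"{hours:.0f} staff hours, about ${cost:,.0f} per semester")
# Whether that is money well spent depends on whether the headcounts
# actually feed a decision, which is the point of questions 1-3 above.
```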

 

A campus we are friendly with recently launched a bike share program that highlights all of these potential costs. To begin, they had to put energy into writing a grant to purchase the initial bikes. Operational money went toward installing trackers on the bikes, along with all the necessary maintenance. The tracking information is the rich data the campus hoped to use to glean insights into student behavior and to determine where on campus to add more bikes, bike lanes, and other bike-friendly amenities. While students were mostly excited about the program, some voiced concerns about being tracked and the potential implications. Then, within the first three months, the local police had already reached out asking for tracker information to see whether any students happened to be near a weekend crime when it occurred. The costs, both direct and indirect, had to be weighed against the value of the data to determine whether the program was sustainable in the longer term.

 


 

5. Is there another way to get to your intended outcome? Are we collecting data in the best way we can?

 

While our methods are rooted in established best practices, the dynamic nature of the educational environment and our evolving understanding of students' needs necessitate periodic introspection. Let's go back to the library example from the introduction. The library was reporting the number of books purchased each year and the size of its collection, with the intended outcome of convincing administration to expand the library. Is that the wrong approach? No. Are there other approaches that might be more effective? Probably. What should the library reflect on to envision new ways of making its case?

 

First, let's consider the diversity of our student populations. Traditional methods may not capture the nuanced experiences of all students, particularly those from underrepresented or marginalized backgrounds. Are we considering culturally responsive and inclusive methods in our data collection? For instance, relying heavily on quantitative data might miss the rich narratives that qualitative methods can unveil. A balanced approach could yield a more holistic understanding.

 

Technology is another pivotal consideration. The advent of advanced analytics, machine learning, and big data has revolutionized many sectors. Within higher education, are we leveraging these tools to their full potential? While we’ve begun to tap into digital analytics to trace online student engagement patterns, there might still be untapped areas where technology can refine our data collection.

 

The ethical dimension of data collection also stands out. While we strive to maintain confidentiality and uphold the highest ethical standards, are we transparent enough with our students about how their data is used? Are they aware of their rights and the purpose behind data collection? Building trust is paramount to the authenticity of the data we gather.

 

Lastly, the feedback loop deserves attention. While we're adept at collecting data, how effectively are we communicating our findings back to the student body and involving them in subsequent decision-making processes? Engaging students as partners in assessment can enhance the relevancy and utility of our data.

 

The landscape of data in universities is vast and teeming with potential. However, the true power of data is not in its abundance, but in its strategic application. As institutions dedicated to learning and growth, it is imperative that we move beyond mere accumulation and towards insightful interpretation. By doing so, we can ensure that our efforts are not just about ticking boxes or filling spreadsheets but about genuinely enhancing the educational experiences of our students. Each data point, when collected with intent and contextualized properly, can shed light on areas of excellence and those needing attention.

 

The ultimate goal is to transform this data from mere numbers into actionable strategies, driving positive change and fostering a culture of continuous improvement. As we navigate the intricate terrains of assessment, let's always prioritize quality over quantity, purpose over passivity, and outcomes over outputs. This discerning approach to data will undoubtedly pave the way for more informed decisions, better resource allocation, and a brighter future for our communities.

 

Blog written by Will Miller and Greta LeDoyen

 
 
 
