Case Study: Cognitive in the Real World

Recruiters are Teach For America’s lifeblood. They spend their time on college campuses across the country, working with juniors and seniors to inspire them to dedicate two years of their lives to Teach For America’s mission.

These recruiter-student interactions form the basis of a lifelong relationship between Teach For America and a student who chooses to join TFA; therefore, recruiters are also responsible for a great deal of critical information in our Salesforce CRM. The tension between the data and human sides of their jobs led TFA to see recruiters as a potential audience for piloting cognitive tools that could reduce the time spent on data work and free them up to spend more time building relationships.


My Role

I was the chief researcher on the cognitive project team.

I conducted a pre-launch research study, getting on the phone with nearly twenty recruiters to learn about their experiences recruiting. I worked with our business analysts, tech lead, and partners on the recruitment team to craft a series of “proofs of technology.” These “POTs” were designed to quickly test technical feasibility and solicit user feedback.

[Research Study outline as shared with project team]

After a series of seven POTs, we settled on two pilots. I was responsible for articulating the vision to partner teams and for conducting longitudinal research over the six-week pilot period to understand how users responded to a cognitive tool that was doing part of their job for them.

I also co-wrote a grant for this research, which secured Teach For America $200K to explore the role of cognitive technologies in the workplace.

The Vision

This mockup was used to help leaders imagine how cognitive technology could drive the CRM of the future. Several of the elements represented here, such as the engagement graph, were inspired by successful proofs of technology.


This mockup is a wireframe illustrating how we could represent several key user requirements in our existing CRM. The study revealed that users felt more confident in data they had seen with their own eyes, and they had great recall for where they found things. Recruiters also occasionally guessed at things, and their level of certainty formed the basis for what they would talk about face-to-face. Therefore, for users to feel confident in the data, we needed to provide these familiar supports.


The Study

I led a longitudinal diary study over Slack, checking in with our study participants at various intervals to see how they were feeling about the pilot and, ultimately, whether they trusted it.
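For the technically curious, here is a minimal sketch of how check-ins like these could be scheduled at a steady cadence using Slack’s chat.scheduleMessage API. The token, channel IDs, prompt text, and weekly cadence are illustrative assumptions; the actual study check-ins may simply have been sent by hand.

```python
# Hypothetical sketch: scheduling weekly diary-study check-ins over Slack.
# Requires a bot token with the chat:write scope; channel IDs are placeholders.
import os
from datetime import datetime, timedelta, timezone

from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

PARTICIPANT_CHANNELS = ["D0AAAAAAA", "D0BBBBBBB"]  # placeholder DM channel IDs
CHECK_IN_PROMPT = (
    "Quick diary check-in: how did working with the pilot tool feel this week? "
    "Anything it got right (or wrong) that stood out?"
)


def schedule_check_ins(weeks: int = 6) -> None:
    """Schedule one check-in per participant per week for the pilot period."""
    start = datetime.now(timezone.utc) + timedelta(days=1)
    for week in range(weeks):
        post_at = int((start + timedelta(weeks=week)).timestamp())
        for channel in PARTICIPANT_CHANNELS:
            client.chat_scheduleMessage(
                channel=channel,
                post_at=post_at,
                text=CHECK_IN_PROMPT,
            )


if __name__ == "__main__":
    schedule_check_ins()
```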

We uncovered a few interesting findings:

First, users were remarkably unforgiving of the technology. When an intern made a mistake, they would work to correct it or simply edit it themselves; the cognitive tool, by contrast, was treated as a machine and expected to deliver machine-like precision. We helped mitigate this by giving the tool a name, “Ceci.” Users would then joke over Slack that “Ceci was having an off day,” rather than expressing the frustration they had shown before the name.

Second, we set a benchmark of 80% accuracy based on what our engineers discovered in the POT phase. Users didn’t see 80% as accurate enough to build trust; their expectations were much higher than the technology could deliver in the short pilot period we had.

Overall, the research from this pilot helped set the stage for future engagements with other partner teams within the organization. Ultimately, though, we didn’t meet our recruiters’ high expectations, and the tool never earned the trust it needed for them to reach the goals set out at the beginning.