Recruiters are Teach For America’s lifeblood. They spend their time on college campuses across the country, working with juniors and seniors to inspire them to dedicate two years of their lives to Teach For America’s mission.
These recruiter-student interactions form the basis of a life-long relationship between Teach For America and a student who chooses to join TFA; therefore, recruiters are also responsible for a great deal of critical information in our Salesforce CRM. The tension between the data and human sides of their jobs led TFA to see recruiters as a potential audience for piloting some cognitive tools to reduce the time spent doing data work, and free them up to spend more time building relationships.
I was the chief researcher on the cognitive project team.
I conducted a pre-launch research study, getting on the phone with nearly twenty recruiters to learn about their experiences recruiting. I worked with our business analysts, tech lead, and partners on the recruiting team to craft a series of “proofs of technology.” These “POTs” were designed to quickly test technical feasibility and solicit user feedback.
After a series of seven POTs, we settled on two pilots. I was responsible for articulating the vision to partner teams and for conducting longitudinal research over the six-week pilot period to understand how users responded to a cognitive tool that was doing part of their job for them.
I also co-wrote a grant for this research, which secured Teach For America $200K to explore the role of cognitive technologies in the workplace.
I led a longitudinal diary study over Slack, checking in with our study participants at regular intervals to learn how they were feeling about the pilot and, ultimately, whether they trusted it.
We uncovered a few interesting findings:
Users were remarkably unforgiving of the technology. If an intern made a mistake, they would work with the intern to correct it, or simply fix it themselves. But they treated the cognitive tool as a machine and expected machine-like precision. We helped mitigate this by giving the tool a name, “Ceci.” Users would then joke over Slack that “Ceci was having an off day,” rather than express the frustration they had shown before the tool had a name.
Secondly, we set a benchmark of 80% accuracy based on what our engineers discovered in the POT phase. Users didn’t see 80% as accurate enough to build trust; their expectations were much higher than what the technology was capable of delivering in the short pilot period we had.
Overall, the research from this pilot helped set the stage for future engagements with other partner teams within the organization. Ultimately, though, we couldn’t meet our recruiters’ high expectations, so the tool never earned the trust it needed to achieve the goals set out at the beginning.