Featured customer case
Lumada customer case code: UC-01872S
Using AI to reproduce customer evaluation criteria and improve the efficiency of interviews
2022-10-27
An increasing number of companies have introduced job-based human-resource management, in which each employee's duties and roles are clearly defined and their achievements are evaluated accordingly. There is a growing need for career interviews and 1-on-1 meetings to ensure that talent is placed appropriately within a company and that careers advance appropriately.
In this context, companies need to ensure that interviews are held and that human resources (often referred to as “talent”) are evaluated properly. This article describes a customer case in which human-resource evaluations are supported by AI models that have learned the knowledge of expert interviewers.
Note: An AI model is software that has been trained on a set of data to recognize certain types of patterns. AI models use various algorithms to learn from and reason over this data, with the general goal of solving business problems. An AI model functions as the software equivalent of a human expert in a specific field.
Utilizing AI models that have learned interview-related knowledge of experts results in interviews in which the interviewee feels like a sympathetic expert is listening.
This service can build AI models that evaluate the skills not only of the talent being interviewed, but also of counselors, sales representatives, online classroom instructors, and professionals in many other industries and occupations, thereby supporting communication and management.
Co-create with Lumada!
Utilizing AI that has learned the judgment criteria of experts
To create a work environment in which diverse talent can thrive, it is necessary to quantitatively evaluate each individual’s personality, aptitude, and skills.
To provide continuous support for the career development of employees, companies must create opportunities for frequent interviews at appropriate times, and must also evaluate all their employees fairly.
However, the following issues occur when interviewers carry out numerous face-to-face interviews:
The quality of an interview depends on the experience and skill of the interviewer, so different interviewers produce inconsistent evaluations. To evaluate talent appropriately, companies need to bridge the gaps in expertise among interviewers and apply consistent evaluation criteria. Raising the skill level of inexperienced interviewers is a further challenge.
In addition, if each interviewer has to interview a large number of employees, scheduling and setting up interviews becomes very difficult. To continuously support employees through interviews, companies need to put a system in place that enables high-quality interviews to be held at appropriate times.
An effective way to evaluate employees fairly and to continuously support their career development is to use quantitative evaluations, which prevent inconsistencies.
Hitachi uses its original AI technology to create AI models that learn tacit knowledge such as the experience and intuition of expert interviewers. Utilizing these AI models leads to solutions to the issues faced in interviews.
In addition, by referring to the evaluation results of the AI models, even inexperienced interviewers can conduct interviews as if an expert interviewer was participating. This allows them to learn the judgment criteria used by expert interviewers and improves their interviewing skills.
AI learns tacit knowledge that is based on experience and intuition
The quality and efficiency of interviews can be improved by creating and utilizing AI models, which resemble copies of expert interviewers.
Continuous support for interviewees with high-quality interviews utilizing AI models
Having learned the interview-related knowledge of expert interviewers, the AI model aids the skill evaluations of diverse personnel in a way that results in the interviewee feeling like a sympathetic expert is listening.
By creating and utilizing an AI model that learns from expert interviewers' evaluations, together with the video and audio records of the interviews on which those evaluations were based, expert interviewers are freed to focus on supporting the interviewees.
AI models analyze both verbal and nonverbal information. Verbal information includes the words spoken and their content, and nonverbal information includes the interviewee's facial expressions and eye movements. Based on this analysis, the AI models provide comprehensive evaluation predictions regarding the interviewee. By viewing an AI model's evaluation predictions, relevant staff can decide on a support policy and on what further action to take, enabling them to respond appropriately; for example, staff might arrange for an expert interviewer to conduct an individual follow-up interview.
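To illustrate the idea of combining verbal and nonverbal cues into one evaluation prediction, here is a minimal sketch. The feature names (positive_word_ratio, eye_contact_ratio, smile_ratio), the weights, and the threshold are all hypothetical assumptions for illustration, not details of Hitachi's actual models.

```python
def predict_evaluation(verbal, nonverbal):
    """Toy fusion of verbal and nonverbal interview cues into one score.
    Feature names, weights, and the 0.5 threshold are illustrative only."""
    score = (0.5 * verbal["positive_word_ratio"]      # what was said
             + 0.3 * nonverbal["eye_contact_ratio"]   # how engaged the gaze was
             + 0.2 * nonverbal["smile_ratio"])        # facial expression
    # A low combined score flags the interviewee for expert follow-up.
    return "follow-up recommended" if score < 0.5 else "on track"

result = predict_evaluation(
    {"positive_word_ratio": 0.4},
    {"eye_contact_ratio": 0.3, "smile_ratio": 0.2},
)
# score = 0.5*0.4 + 0.3*0.3 + 0.2*0.2 = 0.33, so follow-up is recommended.
```

In practice such features would come from speech-to-text and video analysis pipelines, and the mapping from features to an evaluation would be learned from expert interviewers' past judgments rather than hand-set weights.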
The Interview Support AI Service (Japanese) uses an ensemble AI engine originally developed by Hitachi. This engine learns the tacit knowledge of expert interviewers and creates multiple AI models, each resembling a copy of an expert interviewer. The predictions made by these AI models can be used to aid the evaluation of an interview.
With generic AI services, a specialized AI model must be created for each additional task or skill that is to be evaluated.
In contrast, when multiple AI models are used for evaluation, the evaluations given by the individual AI models are judged comprehensively, as if multiple interviewers had each made an evaluation and then compared and discussed their results. The final result is derived through majority voting: the evaluation produced by the greater number of AI models is adopted.
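The majority-voting step described above can be sketched as follows. The three toy "models" here stand in for the ensemble's copies of expert interviewers; their decision rules, the engagement feature, and the A/B/C grades are hypothetical placeholders, not part of Hitachi's actual engine.

```python
from collections import Counter

def ensemble_evaluate(models, interview_features):
    """Each model (a 'copy' of one expert interviewer) predicts a grade;
    the grade chosen by the most models is adopted as the final result."""
    votes = [model(interview_features) for model in models]
    winner, _ = Counter(votes).most_common(1)[0]
    return winner, votes

# Hypothetical models mapping interview features to a grade A/B/C.
model_a = lambda f: "A" if f["engagement"] > 0.7 else "B"
model_b = lambda f: "A" if f["engagement"] > 0.6 else "C"
model_c = lambda f: "B"

grade, votes = ensemble_evaluate([model_a, model_b, model_c],
                                 {"engagement": 0.8})
# votes are ["A", "A", "B"], so the majority grade "A" is adopted.
```

The appeal of this design is that a single model's idiosyncratic judgment is outvoted, much as one interviewer's outlier opinion would be moderated in a panel discussion.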
UT Group Co., Ltd., a human-resources service company, conducts career interviews to better understand the aspirations and circumstances of its employees working as dispatched staff and thereby support their career development. With a limited number of career counselors available to conduct interviews, the company faced several issues.
In response to these issues, we conducted a PoC (proof of concept) project in which we used the Interview Support AI Service to create AI models based on the video and audio records of interviews between expert career counselors and employees, and also the evaluation results from those interviews. The evaluations predicted by the AI models were comparable to those of expert career counselors. The PoC project thus confirmed the feasibility of this method to resolve their issues.
A co-creation project with UT Group Co., Ltd., is now taking the next step: using avatars so that interviews can be held easily, anytime, anywhere, and on various devices, and verifying the value provided by the AI models. We will continue to work on internal deployment at UT Group Co., Ltd., on applying the AI models in actual work, and on expanding our co-creation business.
With increasingly diverse working styles, interviews for evaluating talent are becoming more important, and there is an increasing need to also evaluate the personalities and characteristics of the talent. We plan to release functionality for the Interview Support AI Service that enables continuous training of the created AI models through repeated interview evaluations and feedback. This will provide even greater precision in talent evaluations.
In addition, because of its ability to support a variety of online work, we will expand the fields where this solution can be utilized, encouraging its introduction not only in the human-resources services industry, but also in education, healthcare, finance, and many other industries.
For details on our solutions, see the following webpages.