Faisal Ahmed, Co-Founder & CTO at Knockri – Interview Series

Faisal Ahmed is a technology leader and product enthusiast with a passion for deep learning and AI, and a talent for designing and developing complex technological solutions to everyday problems. He is a computer engineer who completed his Master’s in Information Systems and Design at the University of Toronto and has more than 8 years of industry experience as a software engineer, working for Sony Corp India and CPPIB Canada.

He is the Co-Founder & CTO of Knockri, a behavioural skills assessment platform that improves diversity in hiring without compromising work performance or hiring efficiency.

Could you define the origin of the name Knockri and why it was chosen?

There are two etymological origins for the name Knockri. First, we used the word ‘Knock’ to refer to knocking on the door of opportunity. The second, closely tied to my, Maaz’s, and Jahanzaib’s heritage, is the word ‘Knockri,’ which means ‘job’ to over a billion people in many South Asian languages.

Could you share the genesis story of Knockri and how it originated from a personal experience in the hiring process?

The idea for Knockri first took shape in 2016, when our CEO, Jahanzaib, had a challenging time finding work because of his name. After Maaz suggested he anglicize his name on resumes, Jahanzaib was offered more employment opportunities. It was a seemingly minor change, but it made a world of difference. Noticing this deeply flawed part of the recruitment process, we came together to develop a technological solution. That is where I came in. In collaboration with Jahanzaib and Maaz, I wanted to use technology to level the playing field for candidates. Since AI was in its infancy at the time, I knew it would be an ambitious project to take on, but the challenge made me want to address it even more. We saw the potential of machine learning to reduce barriers for candidates back in 2016, and I have been excited to solve the problem of hiring bias ever since.

Why is most current Diversity, Equity and Inclusion (DEI) training failing both companies and society in general?

DEI training is failing because company motivation might not be in the right place. Political and metrics-driven motivations do not enact real change and instead deepen existing biases. Writing a mission statement and enforcing corporate-directed bias training, where the motivation lies solely in “doing it because we have to,” does not make a difference. Actions speak a lot louder than words, and being motivated for the right reasons, such as productivity or diverse viewpoints among your employees for company success, will make training far more effective.

You were formerly an appointed member of the World Economic Forum’s global council on equality and inclusion. What did this role entail, and what did you learn from the experience?

During our work with the World Economic Forum, we contributed to the Diversity, Equity and Inclusion 4.0 Toolkit, which helps leaders accelerate social progress in the future of work. The report explores the practical opportunities and risks that rapidly emerging technologies represent for diversity, equity and inclusion efforts. It also outlines how technology can help reduce bias in recruitment processes, diversify talent pools, and benchmark diversity and inclusion across organizations. In addition, we contributed to the Forum’s Global Social Mobility Index 2020, which provides insight into the conditions for breaking the link between socio-economic background and an individual’s outcomes in life. We learned the importance of social progress and are proud to have contributed our work to the Forum, furthering DEI initiatives with the help of technology for the future.

How does Knockri use AI to enable employers to overcome unconscious bias during the hiring process?

Knockri has married I/O Psychology and machine learning to help identify behaviors related to the specific skillsets required for certain jobs. Instead of focusing on the way a person speaks, how they look, their gender, or where they are from, we focus on the skills they bring to the table. With Knockri, first impressions do not matter as much as they do in a traditional interview – what matters is the skills that make you the best fit for the job. In fact, we have made it possible to choose the right candidate without even seeing their name!
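As a purely illustrative sketch of the idea of scoring what a candidate says rather than who they are, the toy example below rates an anonymized answer against behavioral indicator phrases for a skill. The skill names, phrases, and scoring rule are hypothetical assumptions for illustration and are not Knockri's model.

```python
# Toy illustration (not Knockri's actual model): score an anonymized transcript
# against behavioral indicator phrases per skill, so the score depends only on
# what was said, never on name, gender, or appearance. Skills and phrases are
# hypothetical.
from typing import Dict, List

SKILL_INDICATORS: Dict[str, List[str]] = {
    "collaboration": ["worked with", "our team", "we agreed", "paired with"],
    "problem_solving": ["root cause", "debugged", "tested", "alternative"],
}

def score_transcript(transcript: str) -> Dict[str, float]:
    """Return a 0-1 score per skill based on how many of that skill's
    behavioral indicator phrases appear in the transcript."""
    text = transcript.lower()
    scores = {}
    for skill, phrases in SKILL_INDICATORS.items():
        hits = sum(phrase in text for phrase in phrases)
        scores[skill] = hits / len(phrases)
    return scores

# Example: only the content of the answer is scored; no identifying fields exist.
answer = ("Our team hit a production bug, so I paired with a colleague, "
          "found the root cause, and tested an alternative fix.")
print(score_transcript(answer))
```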

Can you define, in your view, what makes an ethical AI system, and how Knockri is able to avoid bias in its own AI platform?

AI is a powerful tool in general and can be used for both good and bad. It is also a system that engineers like myself understand better than people understand other people. We have considerable control over how the system behaves, and when it comes to building ethical systems, I believe that as long as motivations are in the right place, it is easier to train the models in a positive way. In an AI system, we can control its behavior and build within specific rules and processes to ensure that the AI’s decision-making is more ethical than trained human processes. And it doesn’t stop there. There are always errors and blind spots that you might overlook when building a system, but at Knockri we always make sure results are being tracked and that outputs are correct. By critiquing the system heavily, our extensive algorithmic testing can identify whether certain qualities are being favored and correct for this, which gives us transparency.
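To make the idea of checking whether certain qualities are being favored more concrete, here is a minimal sketch of a group-level selection-rate audit using the four-fifths (adverse impact) rule. The group labels, threshold, and candidate data are hypothetical illustrations, not Knockri's actual pipeline.

```python
# Minimal sketch of one kind of fairness audit: compare per-group selection
# rates from a model's "advance" decisions and flag groups whose rate falls
# below 80% of the best-performing group's rate (the four-fifths rule).
# Groups, threshold, and data are hypothetical.
import pandas as pd

def adverse_impact_report(df: pd.DataFrame,
                          group_col: str = "group",
                          selected_col: str = "selected",
                          threshold: float = 0.8) -> pd.DataFrame:
    """Compute per-group selection rates and flag groups whose rate is
    below `threshold` times the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean().rename("selection_rate")
    report = rates.to_frame()
    report["impact_ratio"] = report["selection_rate"] / report["selection_rate"].max()
    report["flagged"] = report["impact_ratio"] < threshold
    return report

# Hypothetical audit data: one row per candidate, with a group label and
# whether the scoring model recommended advancing them.
candidates = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "selected": [1,   1,   0,   1,   0,   0,   0,   1,   1,   0],
})

print(adverse_impact_report(candidates))
```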

Where does Knockri source its training data, and how do you ensure this data is not unintentionally biased?

Simply put, we use our own data. We don’t source any data from open sources that we don’t have control over. We take transcripts collected from assessments that have been designed by our I/O Psychologists to ensure the right questions are asked and the right skills are identified. We then audit the annotations to make sure no language is favored – basically, we take full control over our data collection. This ensures the data we collect is high quality and that the results the algorithm gives us are much more effective. Because we want to be true to our mission of being DEI-focused, we have expanded our implementation to various fields, which means we work across different geographies, job positions, and industries to collect a well-rounded set of data. By having more diverse data, we can be more confident it is less biased.
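As a hedged illustration of what auditing annotations so that no language is favored might look like, the sketch below summarizes annotated skill scores per language so systematic gaps can be spotted and re-reviewed. The column names and data are assumptions for illustration, not Knockri's actual schema or process.

```python
# Minimal sketch of an annotation-balance audit: summarize how annotated skill
# scores are distributed per transcript language, so any language that is
# systematically scored lower can be flagged for re-audit. Columns and data
# are hypothetical.
import pandas as pd

def label_balance_by_language(annotations: pd.DataFrame,
                              language_col: str = "language",
                              label_col: str = "skill_score") -> pd.DataFrame:
    """Return count, mean, and standard deviation of annotated scores per
    language, sorted so potential gaps between languages stand out."""
    return (annotations
            .groupby(language_col)[label_col]
            .agg(["count", "mean", "std"])
            .sort_values("mean", ascending=False))

# Hypothetical annotated transcripts: one row per assessment response.
annotations = pd.DataFrame({
    "language":    ["en", "en", "en", "hi", "hi", "fr", "fr", "fr"],
    "skill_score": [4,    3,    5,    4,    4,    3,    5,    4],
})

print(label_balance_by_language(annotations))
```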

Is there anything else that you would like to share about Knockri?

There are a couple of things I’d like to share. The first is about I/O Psychology. People often assume AI is intimidating, especially as it has been developing rapidly over the past several years. However, we are bringing together the age-old science of I/O Psychology – the study of human behavior in the workplace – to drive our technology. Really, we are delivering I/O Psychology at scale with the help of machine learning to reduce bias in recruitment practices. Finally, DEI doesn’t just start and end with our customers. At Knockri, we value DEI as much as those who come to us, and we pride ourselves on that within our own organization. As a company that is passionate about DEI, we encourage you to do your part in reducing bias for those who historically might not have had as many opportunities in the job market. Like I said before, actions speak louder than words.
