Adam Asquini is a Director of Information Management & Data Analytics at KPMG in Edmonton. He is responsible for leading data and advanced analytics projects for KPMG’s clients in the prairies. Adam is passionate about building and developing high-performing teams to deliver the best possible outcomes for clients and to enable an engaging work experience for his teams. He has previously worked at AltaML as the Vice-President of Customer Solutions, the Government of Alberta as a Program Manager and in the Canadian Armed Forces as a Signal Officer. Having followed a non-traditional career path into AI, Adam is a big believer in harnessing the diversity and experience of cross-functional teams and also believes that anyone can join the growing AI community.
We sat down for our interview with Adam at Upper Bound 2023, the annual AI conference held in Edmonton, AB and hosted by Amii (Alberta Machine Intelligence Institute).
You have a non-traditional career path. Could you discuss how you got into AI?
I started my career in the Canadian Armed Forces as a signals officer. Signals officers are responsible for the IT and telecommunication systems that help people communicate, so a lot of radios and satellites. There was some data in there, but it was mostly the core infrastructure technologies we were responsible for, and that’s what initially got me into technology. I’d studied chemical engineering in university of all things, so right from the start I was driven by my own curiosity and desire to learn. It started there, and diving into technology, upskilling and self-development were really important for me.
After 14 years in the military doing a number of different signals jobs, everything from working on a base supporting IT and telecommunication services, to setting up headquarters out in the field and communicating with frontline units, to supporting domestic operations like forest fires and floods, I moved on to the Alberta provincial government. I was in program management looking at cross-government technology initiatives. At the time, the government was centralizing IT, so we were working with various ministries to bring their services together and consolidate things. I did a lot of work there, as well as in investment management. In doing that work, I started to see some organizations leveraging data and analytics.
It really piqued my curiosity. Always being curious and hungry to learn, I started pursuing some of that, either by getting involved in projects there or through self-study, things like Coursera or other training tools to learn a little bit more. I did a lot of reading and researched some of the vendors and platforms that were providing these tools. I became really interested in data and analytics and, through my own natural curiosity and desire to learn more, got more and more heavily involved over time.
Outside of Coursera, are there specific podcasts or books that you would recommend?
I follow a lot of different people on LinkedIn, but a few jump to mind, such as Emerj. Dan Faggella is the person behind it, and he brings a lot of thought leadership to it. I certainly follow some of the mainstream ones like HBR and Forbes. A contact of mine named Andreas Welsch, who works at SAP, releases a lot of content around AI and AI adoption, so I’ve been following him. As far as podcasts, there have been a few that I’ve listened to, and books as well. A really good book I recently read is Infonomics by Doug Laney. He’s former Gartner and MIT, and it’s a really good book that explains a monetization framework for data. I try to immerse myself in as many things as possible, plus plug into project work to learn more.
How has your military experience benefited you in your current role?
In a couple of ways. Some of the core skill sets I learned through my military career: a very structured approach to planning, which is really good, plus time management and prioritization. A military environment really forces you to learn what the most important thing is and to work at a certain pace, assessing trade-offs and understanding how to come up with a course of action that’s workable and that’s going to get you moving forward. In a fast-paced technology landscape like AI, where things are moving so fast, being able to process a lot of information and have a structured approach to understanding what’s important, what’s not important and where you want to focus has been a good skill set.
The other big one is around leadership and teamwork. You’re working within a large organization, and out in the field, teams are being organized and reorganized all the time to get the best group together to complete a mission. Really strong interpersonal, leadership and communication skills are all harped on in military training, and I think I’ve really leveraged those as well.
You were vice president of customer solutions at AltaML for over two years, what is AltaML and what were some interesting projects you worked on?
AltaML is an applied artificial intelligence and machine learning company based out of Alberta, with headquarters in Edmonton, a large office in Calgary and one in Toronto. What they do is work with other businesses to develop software solutions and products that have AI at their core; it’s business to business. The part of the organization I worked in was the services side, where we’d work with oil and gas companies, financial institutions and clients across a lot of different industry verticals. I worked with them to define business problems that were relevant and could make an impact if solved with AI, and then worked them through the process of bringing their data together, building AI models, deploying them and working through the change management side so that they could be operationalized and used, really helping those organizations solve problems by building applied AI solutions.
The role was vice president of customer solutions. When I started, I was in a project manager role leading a few AI engagements and then moved up over time. The vice president of customer solutions role was responsible for the delivery function, resource management for projects and active account management; a lot of the client-facing aspects of that work fell into my team.
As far as projects are concerned, there were a lot. In one way, shape or form, as either a hands-on project manager, a coach or a quality assurance resource, I would have worked on dozens of AI projects over the two and a half years. One of my favorites was a wildfire project with the Government of Alberta. They were struggling, on days where there’s a moderate fire risk, to understand whether a fire was likely to occur in a particular area. When they were uncertain, their practice was to schedule whatever resources they had available, which would include contracting additional resources and heavy equipment like bulldozers, airplanes and helicopters, which is of course expensive.
The purpose of the AI project was to predict, for a given region, the probability of a fire occurring in that region the next day, to help them make decisions around the optimal allocation of resources for a process they called pre-suppression, which is really the proactive scheduling and allocation of resources.
It was really cool to see that in certain scenarios you could draw down resources, reduce the level or focus them at certain times of the day. That would save a lot of money without introducing material risk of missing a fire; there was millions of dollars of savings potential. That work has carried on. Even today, they’re looking at extending the time window out a little bit and making the zones smaller and more granular to better optimize resources. Looking at the fire season we’ve had so far here in Alberta, any intelligence you can provide up front about where the risks are, so that you can optimize or at least reallocate resources to the right places, is really impactful work. It was really enjoyable.
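At a high level, that kind of pre-suppression tool can be framed as a daily fire-probability prediction per region, with resource decisions made against a threshold. Below is a minimal sketch of that framing, assuming a tabular dataset of historical weather and fire-occurrence features; the file name, feature columns, model choice and threshold are illustrative assumptions, not AltaML’s actual implementation.

```python
# Minimal sketch: next-day fire probability per region from historical data.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical file and feature columns; real features would come from
# weather, fuel and historical fire data.
df = pd.read_csv("region_days.csv")  # one row per region per day
features = ["temperature", "humidity", "wind_speed", "days_since_rain"]
X, y = df[features], df["fire_next_day"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)

# Probability of a fire tomorrow for each region-day in the held-out set.
fire_prob = model.predict_proba(X_test)[:, 1]

# Illustrative decision rule: only pre-position extra resources (crews,
# heavy equipment, aircraft) where the predicted probability is high enough.
THRESHOLD = 0.3
pre_position = fire_prob >= THRESHOLD
```

In practice, much of the value sits in choosing that threshold, since it encodes the trade-off between the cost of pre-positioning resources and the cost of missing a fire.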
I also did some work in claims processing. As an insurance provider gets thousands of claims coming in, which ones could be automatically approved, which ones would require a human review, or which team should a claim be forwarded to so it gets the right level of review? That type of work is also really important and can save organizations a lot of effort and a lot of money in how they do their business.
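That triage step is essentially a classification and routing problem: score each incoming claim, then send it down the cheapest path that still manages the risk. Here is a rough sketch of the pattern, with hypothetical fields and thresholds rather than any specific insurer’s rules:

```python
# Rough sketch of claims triage: score each claim, then route it to
# auto-approval, a specialist team, or general human review.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    amount: float
    risk_score: float  # assumed to come from an upstream ML model

def route_claim(claim: Claim) -> str:
    """Illustrative routing rules; real thresholds would be set with the business."""
    if claim.risk_score < 0.1 and claim.amount < 1_000:
        return "auto_approve"
    if claim.risk_score > 0.8:
        return "special_review"
    return "human_review"

print(route_claim(Claim("C-1001", amount=450.0, risk_score=0.05)))  # auto_approve
```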
You’re currently the director of information management and data analytics at KPMG. What does this role entail exactly?
I work with businesses to guide them through the journey of solving these problems with, in this case, a broader set of data and analytics capabilities. We work on everything from data strategy up front, to helping organizations organize data from disparate systems and bring it together, to reporting and analytics, as well as AI and ML. It’s a bit of a broader role than my previous one, but that’s also really exciting to me. It fuels my passion for learning and self-development.
As a director, I’m usually working with senior leaders on the client side to help advise them through the journey, give them a sense of what it’s going to take, what those projects look like and how they can prepare. There’s a big focus on adoption as well, especially with advanced analytics systems that are new and that sometimes come with a negative connotation from the workforce, so really working with them on how to best implement these solutions, as well as the processes and structures they’re going to need. That’s a big part of the role. Internally, it’s leading the engagement and the project teams, helping set the right priorities for the project team and guiding the work, as well as synchronizing the different teams that are working on these projects.
In a recent interview with the Calgary Herald, you spoke about how there’s been a fair amount of AI adoption in Alberta. In what industries are you seeing this the most?
I’ve seen adoption across a number of different industries in Alberta. Energy certainly has a lot of it; I’ve seen use cases where organizations are using artificial intelligence to help optimize maintenance and safety inspections on pipelines, where should or could digs occur? Because digs are very expensive to do if there’s a suspected leak. I’ve also seen a lot in supply chain. As large organizations do mergers and acquisitions, their data’s all over the place. Sometimes they really struggle with finding items in their material masters, so they’re using the language models we’re seeing emerge right now to organize that data and structure it in a way that it can be analyzed. We’ve seen significant work in consolidating supply contracts just by being able to better search, query and find information. That one can span multiple industries, not necessarily just energy, but I’m seeing it applied there.
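A concrete flavour of that material-master problem is matching free-text item descriptions across merged systems. The sketch below uses sentence embeddings and cosine similarity to pair up equivalent items; the library choice and item strings are assumptions for illustration, not the approach any particular client used.

```python
# Sketch: match free-text material descriptions across two legacy systems
# using sentence embeddings and cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical item descriptions from two merged systems.
system_a = ["Gate valve 2in stainless steel", "Hex bolt M12 x 50 galvanized"]
system_b = ["2in SS gate valve", "Galvanised M12x50 hex bolt", "Ball valve 1in brass"]

emb_a = model.encode(system_a, convert_to_tensor=True)
emb_b = model.encode(system_b, convert_to_tensor=True)

# For each item in system A, find its closest match in system B.
scores = util.cos_sim(emb_a, emb_b)
for i, desc in enumerate(system_a):
    j = int(scores[i].argmax())
    print(f"{desc!r} -> {system_b[j]!r} (similarity {float(scores[i][j]):.2f})")
```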
Safety is a big one, using either image processing or even the language models to find the most relevant type of safety brief or safety inspection that should be occurring at a particular site. In financial services, there’s a lot of work on personalizing the experience for a banking customer; providing the best possible advice and finding tailored solutions for people in different financial scenarios is a really important focus and we’ve seen a lot of work there. And then insurance, as I mentioned before, with a lot of this triaging and claims processing. One more I’d suggest is forestry and natural resources land management, where we’re seeing a bit of an uptick in using satellite imagery to detect changes to land, manage agreements on land and, with those image processing techniques, identify things that should or shouldn’t be there, or things that have changed over time.
It’s really exciting, and we see different organizations at different stages of their maturity. Some are just starting or experimenting, others are further along and fully adopting, but most organizations recognize that if they don’t start, or if they’re not moving forward on this, they’re going to be left behind and that’s going to create quite a competitive disadvantage for them, so the interest is really high across the board. Obviously, generative AI capabilities are generating a lot of interest as well.
Talking about generative AI, how do you see this technology transforming the future?
I’m very excited for it. I see the potential, and I also think it’s important to have the right controls in place for generative AI. I really do think there are a lot of use cases where this could be applied to make huge productivity or efficiency gains for business. Some of that, like the supply chain use case I just mentioned, was leveraging some of those techniques even before ChatGPT was publicly announced. As far as where I see this going, one of the other cool trends is that more and more of this technology is being embedded into mainstream business applications right now. Microsoft has announced their Copilot tool that’s going to be integrated with your Microsoft Office apps. I saw in some of their material things like writing a briefing note and just prompting the word processor with, “Can you make this paragraph shorter?” And it just does it for you.
As those generative AI technologies get embedded straight into mainstream business applications, it’s going to force businesses to think about how and when they adopt them, how they control them, how they can monitor for quality assurance on the products that they’re producing. When it’s a whole standalone separate capability, it’s a little bit easier to slow play it or ignore it, but seeing this being embedded into mainstream business applications and platforms is really going to drive that discussion forward.
I’m also hoping that, with the emphasis right now on the responsible use of this technology, it helps organizations put an emphasis on responsible AI: putting the right processes and the right governance in place to make sure their AI solutions are being effectively built, that the risk is being managed throughout the entire life cycle, that there are follow-on checks and that you can trust the outputs. I’m hoping the hype right now around generative AI actually continues to drive that discussion forward.
Can you discuss why responsible AI and reducing AI bias are so important to you?
Absolutely. I think it has to be, for a number of reasons. Most of the people building these systems take pride in the work they’re doing and don’t want their systems to be biased, so there’s an internal need to keep your workforce engaged, happy and protected. Legally, there are examples out there where organizations have faced legal or regulatory challenges for the bias in their AI. There’s a classic case study of an organization that was using AI in hiring. The data set was overly biased towards men over women, so their AI discriminated against women.
That was an AI tool by Amazon.
Things like that have already occurred and have the potential to keep occurring if you don’t have the right controls in place, so having a real focus on this is going to be critical for most organizations. And then there’s reputational risk, of course. If you get that wrong, it could have a huge impact on your business.
You’re also a big believer in harnessing the diversity and experience of cross-functional teams. Why is diversity so important in your view?
Right now, the types of problems being solved with AI are so complex, from a business perspective and from the data that underlies them, that no one person or one role can solve them alone. Having a good cross-functional team with different perspectives and skill sets is really important, so that people who are strong in one area can really harness their strength. As far as the diversity piece, another really big driver of having a diverse team is that in most cases the end users of these systems will be a diverse group of people, and not bringing those perspectives into your team when you’re building them really sets you up for making mistakes down the road or missing things, things that I might not think about that someone else may, and they bring that perspective forward. It’s easier to solve problems and adjust for that in the development cycle than it is after a release.
I also just believe strongly that having different perspectives is where you get the best dialogue. You get really good questions from people who are seeing something through a different lens, and it forces conversation about how to best approach something. It makes you turn over some of those stones you might not have turned over if that person wasn’t there. Having a diverse group of people looking at a problem really enables you to get the best possible outcome and solution.
What do you think will be the next big breakthrough in AI?
Through that generative AI lens, I think we will see more of that technology being embedded into mainstream applications, and that’s already starting. That’s really going to be huge for the adoption of the technology because it’ll be right there in the systems people are already using. It will be really important, and it might open the door to some of the other use cases as people become more familiar with what it can do, what its limitations are and how it can be optimally used. That might just trigger people’s thinking: okay, now I have a better sense of the type of problems this will solve, we have this problem, this would be really cool to solve. It may open up some new doors.
I’m also hoping that regulatory policy is a breakthrough that comes in the near future. I know there’s a lot of movement at the lawmaking and regulatory level, but what I’m hoping is that individual businesses also figure out for themselves, or get advice on, how they need to be thinking about it and what internal controls they should be putting in place now.
Laws and regulations take a long time. Businesses can drive a lot of change by taking on some of those controls internally and thinking them through. There is precedent for this, obviously, with audits and things like that, something that KPMG is really strong in. But think about what those controls might be: how we might control it, how do we test outputs, how do we make sure we’re reducing hallucinations? What additional steps can we take after the model has produced its output to minimize any potential harm or risk? Those are the right types of questions, and I’m hoping some of the hype right now leads to a breakthrough in how we think about this and how we build the right structures, processes and teams on the responsible AI side.
Thank you for the great interview. Readers who wish to learn more should visit KPMG.