Ramprakash Ramamoorthy is the Head of AI Research at ManageEngine, the enterprise IT management division of Zoho Corp. ManageEngine empowers enterprises to take control of their IT, from security, networks, and servers to applications, service desks, Active Directory, desktops, and mobile devices.
How did you initially get interested in computer science and machine learning?
Growing up, I had a natural curiosity towards computing, but owning a personal computer was beyond my family’s means. However, thanks to my grandfather’s position as a professor of chemistry at a local college, I sometimes got the chance to use the computers there after hours.
My interest deepened in college, where I finally got my own PC and developed a couple of web applications for my university. Those applications are still in use today, a whole 12 years later, which really underlines the longevity of that early work. The experience was a comprehensive lesson in software engineering and in the real-world challenges of scaling and deploying applications.
My professional journey in technology started with an internship at Zoho Corp. Initially, my heart was set on mobile app development, but my boss nudged me to complete a machine learning project before moving on to app development. This proved to be a turning point. I never did get the opportunity to do mobile app development, so it's a little bittersweet.
At Zoho Corp, we have a culture of learning by doing. We believe that if you spend enough time with a problem, you become the expert. I’m really grateful for this culture and for the guidance from my boss; it’s what kick-started my journey into the world of machine learning.
As the Head of AI Research at Zoho & ManageEngine, what does your average workday look like?
My workday is dynamic and revolves around both team collaboration and strategic planning. A significant portion of my day is spent working closely with a talented team of engineers and mathematicians. Together, we build and enhance our AI stack, which forms the backbone of our services.
We operate as the central AI team, providing AI solutions as a service to a wide array of products within both ManageEngine and Zoho. This role involves a deep understanding of the various product lines and their unique requirements. My interactions aren’t just limited to my team; I also work extensively with internal teams across the organization. This collaboration is crucial for aligning our AI strategy with the specific needs of our customers, which are constantly evolving. This is such a great opportunity to rub shoulders with the smartest minds across the company.
Given the rapid pace of advancements in AI, I dedicate a substantial amount of time to staying abreast of the latest developments and trends in the field. This continuous learning is essential for maintaining our edge and ensuring our strategies remain relevant and effective.
Additionally, my role extends beyond the confines of the office. I have a passion for speaking and travel, which dovetails nicely with my responsibilities. I frequently engage with analysts and participate in various forums to evangelize our AI strategy. These interactions not only help in spreading our vision and achievements but also provide valuable insights that feed back into our strategic planning and execution.
You’ve witnessed AI’s evolution since positioning ManageEngine as a strategic AI pioneer back in 2013. What were some of the machine learning algorithms that were used in these early days?
Our initial focus was on supplanting traditional statistical techniques with AI models. For instance, in anomaly detection, we transitioned from a bell curve methodology that flagged extremes to AI models that were adept at learning from past data, recognizing patterns and seasonality.
We incorporated a wide variety of algorithms, from support vector machines to decision-tree-based methods, as the foundation of our AI platform. These algorithms were pivotal in identifying niche use cases where AI could leverage historical data for pattern detection, forecasting, and root cause analysis. Remarkably, many of them are still in production today, underlining their relevance and efficiency.
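The shift from bell-curve statistics to models that learn from past data can be sketched with a toy example (illustrative only, not ManageEngine's production code): a global z-score test misses a nighttime spike that looks perfectly normal by daytime standards, while a per-phase baseline learned from previous cycles catches it.

```python
import statistics

def zscore_anomalies(series, threshold=3.0):
    """Bell-curve approach: flag points more than `threshold` standard
    deviations from the global mean, ignoring any seasonality."""
    mean = statistics.fmean(series)
    stdev = statistics.stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mean) > threshold * stdev]

def seasonal_anomalies(history, current, threshold=3.0):
    """Seasonality-aware approach: learn a per-phase baseline (mean and
    spread) from past cycles, then flag points in the current cycle that
    deviate strongly from the baseline for their phase."""
    period = len(current)
    flagged = []
    for phase in range(period):
        bucket = history[phase::period]
        mean = statistics.fmean(bucket)
        stdev = statistics.stdev(bucket) if len(bucket) > 1 else 0.0
        if abs(current[phase] - mean) > threshold * max(stdev, 1.0):
            flagged.append(phase)
    return flagged

# Hourly metric with a daily pattern: quiet at night, busy in the day.
day = [10, 10, 10, 10, 100, 100, 100, 100]
history = day * 3            # three normal days of past data
current = day.copy()
current[2] = 100             # a daytime-sized spike at 2 a.m.

print(zscore_anomalies(history + current))   # []  -- globally, 100 looks normal
print(seasonal_anomalies(history, current))  # [2] -- abnormal for that hour
```

The nighttime spike sits well inside the global distribution, so the bell-curve test stays silent; only the model that has learned the daily pattern flags it.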
Could you discuss how LLMs and Generative AI have changed the workflow at ManageEngine?
Large language models (LLMs) and generative AI have certainly caused a stir in the consumer world, but their integration into the enterprise sphere, including at ManageEngine, has been more gradual. One reason for this is the high entry barrier, particularly in terms of cost, and the significant data and computation requirements these models demand.
At ManageEngine, we are strategically investing in domain-specific LLMs to harness their potential in a way that's tailored to our needs. This involves developing models that are not generic in their application but are fine-tuned to address specific areas within our enterprise operations. For example, we are working on an LLM dedicated to security, which can flag security events more efficiently, and another that focuses on infrastructure monitoring. These specialized models are currently in development in our labs, reflecting our commitment to leveraging the emergent behaviors of LLMs and generative AI in a way that adds tangible value to our enterprise IT solutions.
ManageEngine offers a plethora of AI tools for various use cases. What's one tool that you are particularly proud of?
I’m incredibly proud of all our AI tools at ManageEngine, but our user and entity behavior analytics (UEBA) stands out for me. Launched in our early days, it remains a strong and vital part of our offerings. Recognizing market expectations, we made it standard practice to attach an explanation to each flagged anomaly. The UEBA capability is constantly evolving, and we carry our learnings forward to make it better.
ManageEngine currently offers the AppCreator, a low-code custom application development platform that lets IT teams create customized solutions rapidly and launch them on-premises. What are your views on the future of no code or low code applications? Will these eventually take over?
The future of low-code and no-code applications, like our AppCreator, is highly promising, especially in the context of evolving business needs. These platforms are becoming pivotal for organizations to extend and maximize the capabilities of their existing software assets. As businesses grow and their requirements change, low-code and no-code solutions offer a flexible and efficient way to adapt and innovate.
Moreover, these platforms are playing a crucial role in enabling businesses through IT. By offering evolving technologies, such as AI as a service, they significantly lower the barrier to entry for organizations to sample the power of AI.
Could you share your own views on AI risks including AI bias, and how ManageEngine is managing these risks?
At ManageEngine, we recognize the serious threat posed by AI risks, including AI bias, which can widen the technology access gap and affect critical business functions like HR and finance. For example, stories of AI exhibiting biased behavior in recruitment are cautionary tales we take seriously.
To mitigate these risks, we implement strict policies and workflows to ensure our AI models minimize bias throughout their lifecycle. It’s crucial to monitor these models continuously, as they can start unbiased but potentially develop biases over time due to changes in data.
We’re also investing in advanced technologies like differential privacy and homomorphic encryption to fortify our commitment to safe and unbiased AI. These efforts are vital in ensuring that our AI tools are not only powerful but also used responsibly and ethically, maintaining their integrity for all users and applications.
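As a generic illustration of the differential privacy idea mentioned above (this is not ManageEngine's implementation), the classic Laplace mechanism answers an aggregate query with calibrated noise, so no single individual's data can be inferred from the result:

```python
import math
import random

def private_count(true_count, epsilon):
    """Add Laplace noise calibrated to the query's sensitivity (1 for a
    count: one user changes the result by at most 1) divided by epsilon.
    Smaller epsilon means stronger privacy and noisier answers."""
    scale = 1.0 / epsilon
    u = random.random() - 0.5
    # Inverse-CDF sampling of Laplace(0, scale).
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# The same query answered twice gives different, but similar, results.
print(private_count(100, epsilon=1.0))
print(private_count(100, epsilon=0.1))  # stronger privacy, more noise
```

The noisy answers remain useful in aggregate while bounding what any one record can reveal, which is the trade-off such techniques formalize.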
What is your vision for the future of AI and robotics?
The future of AI and robotics is shaping up to be both exciting and transformative. AI has certainly experienced its share of boom and bust cycles in the past. However, with advancements in data collection and processing capabilities, as well as emerging revenue models around data, AI is now firmly established and here to stay.
AI has evolved into a mainstream technology, significantly impacting how we interact with software at both enterprise and personal levels. Its generative capabilities have already become an integral part of our daily lives, and I foresee AI becoming even more accessible and affordable for enterprises, thanks to new techniques and advancements.
An important aspect of this future is the responsibility of AI developers. It is crucial for builders to ensure that their AI models are robust and free from bias. Additionally, I hope to see legal frameworks evolve at a pace that matches the rapid development of AI to effectively manage and mitigate any legal issues that arise.
My vision for AI is a future where these technologies are seamlessly integrated into our daily lives, enhancing our capabilities and experiences while being ethically and responsibly managed.
Thank you for the great interview. Readers who wish to learn more should visit ManageEngine.