
The “Shiny New Toy” Challenge: AI, Security, and the New Balancing Act 



By Mike Arrowsmith, Chief Trust Officer at NinjaOne

It’s human nature to want to be a pioneer. To be the first to try, adopt, and experiment with the latest tools and technologies. See the crowds lining up for a product launch, or the latest iPhone release, and you’ll witness it first-hand – people are always looking for the latest, fastest, and most exciting new thing. This affinity for the ‘shiny new toy’ is innate, but in business it can cause real security and operational challenges if organisations don’t have the basics down.

Here’s how you can empower your workforce and technical teams to adopt new technologies and continue to experiment with ‘shiny new toys’ thoughtfully, without compromising security. It all starts with making sure you have basic security tenets in place from the get-go.

Laying the Internal Groundwork

Log onto LinkedIn or read any tech headline these days, and one of the first things you’ll see is two letters: AI. Yet even as the technology matures and new platforms, solutions, and iterations emerge (on a near-hourly basis), organisations are still grappling with what AI use cases look like in practice.

A recent survey from Gigamon found that mass AI adoption is causing businesses to overlook the essentials, often inadvertently providing adversaries with new opportunities to strike. The study found 91% of organisations make risky security compromises due to AI. And while AI investment surged by 62% in 2024, the WEF Global Cybersecurity Outlook Report found only 37% of companies had a process to assess security before AI tools are implemented.

If organisations want their AI investments to scale, they need to prioritise security just as much. AI deployments almost always start with the best intentions. Automating tasks, analysing threat patterns, or enhancing user experiences are just a few of the benefits that AI can offer internal IT and security teams. However, without proper guardrails or controls in place, these systems can bring with them new risks. Integrating AI into an IT ecosystem with unpatched systems, weak access controls, a lack of training, or poor visibility is like building a skyscraper on sand.

To build on more stable ground, organisations have to prioritise thoughtful AI adoption. This means thoroughly vetting and testing new AI-enabled solutions before integrating them into operational workflows, establishing clear, robust cybersecurity frameworks to underpin AI integration, and having foundational security practices in place before deploying new AI tools. Comprehensive data management strategies, regular vulnerability assessments, automated backups, strict access controls, and effective endpoint management are just a few of the components that go into laying a firm security foundation – and they’re essential for trying out the latest shiny new toys.

Accounting for Individualised Employee Experiences

Personalisation is the name of the game for the new digital employee experience. Today’s employees expect to be able to work from anywhere (WFA) across a wide variety of devices (some folks do their best work on MacBooks, while others prefer Windows devices), and they expect to have a wide variety of tools and solutions at their disposal to make doing that work as easy as possible.

But supporting and securing employees across a diverse range of locations, devices, and applications is a tall order for IT and security teams to manage. Employees, often eager to play with the latest tools, are infamous for bypassing security protocols to access and install their favourite new app – often failing to recognise that they’re introducing a wide swath of new (and potentially unknown) risks to their organisation. In fact, recent research from Software AG found that half of all employees are using non-company-issued AI tools.

This is particularly true in environments where endpoints aren’t being effectively managed. 90% of successful cyberattacks start at the endpoint. And users experimenting with unauthorised AI tools (think: generative assistants) run the risk of leaking sensitive data or creating unintended openings for malicious actors without proper IT or security oversight. This can open organisations up to additional risk, regulatory fines, and real reputational damage.

While AI represents a generational opportunity for organisations and individuals alike, its promise can only be realised if organisations are able to manage the technology thoughtfully and securely. Organisations need to set parameters with employees around internal AI use and roll out awareness training and education programmes for any new emerging technologies, to make sure their teams know the expectations and risks that come with new tools.

Building For a More Resilient Future

It’s up to IT and security teams to get employees the tools they need to do their best work while also keeping organisations secure. As with all significant technology investments, organisations and technical leaders should be wary of adopting any ‘shiny new toy’ without a complete understanding of its impact. They also need a solid cybersecurity foundation in place, supported by ongoing employee awareness initiatives, so they can be thoughtful adopters of the technology while staying ahead of new risks.

The future of secure innovation and individualised business growth depends on balancing technical enthusiasm with disciplined IT and security hygiene. And whether it’s AI or the next technical development (our human aptitude for the shiny new thing isn’t going away anytime soon), having a secure organisational foundation in place will better position IT and security teams to adopt, experiment, and drive new growth while also enabling the best possible employee experience.
