
Daily Tech Digest - October 07, 2025


Quote for the day:

"There is only one success – to be able to spend your life in your own way." -- Christopher Morley



5 Critical Questions For Adopting an AI Security Solution

An AI-SPM solution must be capable of seamless AI model discovery, creating a centralized inventory for complete visibility into deployed models and associated resources. This helps organizations monitor model usage, ensure policy compliance, and address potential security vulnerabilities early. By maintaining a detailed overview of models across environments, businesses can proactively mitigate risks, protect sensitive data, and optimize AI operations. ... An effective AI-SPM solution must tackle risks that are specific to AI systems. For instance, it should protect training data used in machine learning workflows, ensure that datasets remain compliant under privacy regulations, and identify anomalies or malicious activities that might compromise AI model integrity. Make sure to ask whether the solution includes built-in features to secure every stage of your AI lifecycle—from data ingestion to deployment. ... When evaluating an AI-SPM solution, ensure that it automatically maps your data and AI workflows to governance and compliance requirements. It should be capable of detecting non-compliant data and providing robust reporting features to enable audit readiness. Additionally, features like automated policy enforcement and real-time compliance monitoring are critical to keeping up with regulatory changes and preventing hefty fines or reputational damage.
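
To make the inventory idea concrete, here is a minimal sketch of the kind of centralized model inventory the article calls for. The schema and the `training_data_reviewed` policy field are invented for illustration; a real AI-SPM product would discover these records automatically.

```python
# Toy model inventory: each record tracks a deployed model and whether its
# training data has passed a privacy/compliance review (field names invented).
MODEL_INVENTORY = [
    {"name": "fraud-scorer", "env": "prod", "training_data_reviewed": True},
    {"name": "support-bot", "env": "prod", "training_data_reviewed": False},
]

def non_compliant(inventory):
    # Surface models whose training data has not passed review, the kind of
    # non-compliant asset an AI-SPM tool should flag for audit readiness.
    return [m["name"] for m in inventory if not m["training_data_reviewed"]]
```

Even this toy version shows why a single inventory matters: compliance questions become simple queries instead of per-team investigations.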


The architecture of lies: Bot farms are running the disinformation war

As bots become more common and harder to tell from real users, people start to lose confidence in what they see online. This creates the liar's dividend, where even authentic content is questioned simply because everyone knows fakes are out there. If any critical voice or inconvenient fact can be dismissed as just a bot or a deepfake, democratic debate takes a hit. AI-driven bots can also create the illusion of consensus. By making a hashtag or viewpoint trend, they create the impression that everyone is talking about it, or that an extreme position enjoys broader support than it actually has. ... It’s still an open question how well online platforms stop malicious, bot-driven content, even though they are the ones responsible for policing their own networks. Harmful AI bots continue to get through the defenses of major social media platforms. Even though most have rules against automated manipulation, enforcement is weak and bots exploit the gaps to spread disinformation. Current detection systems and policies aren’t keeping up, and platforms will need stronger measures to address the problem. ... The EU and the US are both moving to address bot-driven disinformation. In the EU, the Digital Services Act obliges large online platforms to assess and mitigate systemic risks such as manipulation, and to provide vetted researchers with access to platform data.


Is the CISO chair becoming a revolving door?

“A CISO is interacting with a lot of interfaces, and you need to have soft skills and communicate well with others. In many cases, you need to drive others to take action, and that’s super tedious. It’s very difficult to keep doing it over time,” Geiger Maor says. “In many cases, you’re in direct conflict with company goals and your goals. You’re like a salmon fish going upstream against everybody else. This makes it very difficult to keep a long tenure.” ... That constant exposure to risk and blame is another reason some CISOs hesitate to take the role in the first place, according to Rona Spiegel, senior manager, security and trust, mergers and acquisitions at Autodesk and former cloud governance leader at Wells Fargo and Cisco. “The bad guys, especially now with AI and automation, they’re getting more sophisticated, and they only have to be right once, but the CISO has to be right all day every day. They only have to be wrong once, and they get blamed … you’re an operational cost centre no matter what because you’re not bringing in revenue, so if something goes wrong … all roads lead to the CISO,” Spiegel says. ... Chapman is also seeing a rise in fractional CISOs, brought in part-time to set up frameworks or oversee specific projects. “It really comes down to the individual,” he says. “Some want that top seat, speaking to the board, communicating risk. But I am also seeing some say, ‘It doesn’t have to be a CISO role.’”


RPA versus hyperautomation: Understanding accuracy (performance) benchmarks in practice

RPA is like that reliable coworker who never complains and does exactly what you ask. It loves repetitive, predictable tasks such as copying and pasting data, moving files between systems or generating standard reports. When everything goes according to plan, RPA is perfect. ... Hyperautomation is the next-level upgrade. It combines RPA with AI, natural language processing (NLP), intelligent document processing (IDP), process mining and workflow orchestration. In simple terms, it doesn’t just follow rules. It learns, adapts and keeps things moving even when the world throws curveballs. With hyperautomation, processes that would have stopped RPA cold continue without a hitch. ... RPA and hyperautomation are not rivals. They are more like teammates with different strengths. RPA shines when tasks are stable and repetitive, quietly doing its job without fuss. Hyperautomation brings in intelligence, flexibility and the ability to handle entire processes from start to finish. When applied thoughtfully, hyperautomation cuts down on manual corrections, handles exceptions smoothly and delivers value at scale. All this happens without the IT team needing to hire extra coffee runners to fix errors or babysit the robots. The real goal is to build automation that works at the process level, adapts to change and keeps running even when things go off script.
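
The difference can be sketched in a few lines. This is a hypothetical illustration (all field names invented): the RPA-style step follows one fixed rule and halts on anything unexpected, while the hyperautomation-style step tolerates variation and escalates only when genuinely stuck.

```python
def rpa_copy_total(record):
    # RPA-style: one fixed rule; any deviation stops the bot cold.
    return float(record["Total"])

def hyper_copy_total(record):
    # Hyperautomation-style: normalizes common variations (aliases, currency
    # formatting) instead of halting, and escalates unrecognizable input.
    for key in ("Total", "total", "amount_due", "Amount"):
        if key in record:
            return float(str(record[key]).replace(",", "").lstrip("$"))
    raise ValueError("no recognizable total field; route to a human")
```

On a tidy record both behave identically; on a record whose field is named `amount_due`, the RPA rule raises a `KeyError` while the adaptive version keeps the process moving.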


The pros and cons of AI coding in the IT industry

Although now being used by the majority of programmers, AI tools were not universally welcomed upon their launch, and it has taken time to move beyond the initial doubts and suspicion surrounding generative AI. It’s important to note that risks remain when using AI-generated code, which organizations will have to mitigate. “Integrating AI into our coding processes was initially met with skepticism, both within our organization and across the industry,” Jain explains. “Concerns included AI's ability to comprehend complex codebases, the potential for generating buggy code, adherence to company standards, and issues surrounding code and data privacy.” However, since the launch of the first generative AI tools at the end of 2022, Jain says that the rapid evolution of AI technology’s implementation has alleviated many concerns, with features such as codebase indexing and secure training protocols addressing major concerns. “These advancements have enabled AI tools to understand code context, follow company standards, and maintain robust security measures,” Jain tells ITPro. Nevertheless, security and accountability are also major factors for any IT company to consider when looking to use AI as part of the development process, and research continues to show glaring vulnerabilities in AI code. There are certain steps that simply can’t be replaced by AI.


Why AI Is Forcing an Invisible Shift in Risk Management

Without the need for complex, technical coding knowledge, there are increasingly more departments within a business capable of driving and contributing to the development lifecycle, forcing a shift from centralized innovation to development that is fractalized across the entire organization. This shift has been revolutionary, driving more lucrative development by empowering technical teams and business leaders to align on goals and work hand-in-hand. Still, this transition has changed the organization’s relationship with risk. ... In the age of distributed application building, organizations have to ask more questions about governance and risk, which can mean many different things depending on where the technology sits in the business. Is the application going to be customer-facing? How sensitive is the data? How should it be stored? What are some other privacy considerations? These are all questions businesses must ask in the age of fractured development — and the answers will vary from case to case. ... The shift to decentralized development is not the first change technology has seen, and it’s certainly not the last. The key to staying ahead of the curve is paying attention to the invisible shifts that come with these disruptions, such as the changes that have recently come with the adoption of AI and low code. As these technologies reimagine the typical risk management and compliance model, it’s important for businesses to embrace adaptive governance and adapt accordingly.


How cross-functional teams rewrite the rules of IT collaboration

When done right, IT isn’t just an optional part of cross-functional collaboration, it’s an integral part of what makes collaboration possible. “There’s a lot of overlap now between IT, sales, finance and regulatory compliance,” says George Dimov, managing owner of Dimov Tax. ... What happens when IT plays a key role in breaking down barriers? First, getting IT involved in cross-functional teams means IT is at the table from day one. Rather than having an environment where a department requests a report or tool from IT after the fact, or has it digitize information later on, IT is present in all meetings. As more organizations recognize the inherent importance of digital transformation, the need for IT expertise — including perspectives from individuals with different types of IT experience — becomes more pronounced. It’s up to the CIO to provide the cross-functional leadership that ensures IT is involved in such efforts from the start. ... Even in situations when IT isn’t directly involved in day-to-day collaboration, it can still play a valuable role by providing technology resources that aid and facilitate collaboration. Ideally, IT should be part of the solution to eliminate barriers, whether that’s through digital sharing tools, reporting mechanisms, or something else. IT can and should be at the forefront of enabling cross-functional collaboration between teams and departments.


Service-as-software: The new control plane for business

Historically, enterprises ran on islands of automation — enterprise resource planning for the back office and, later, a proliferation of apps. Customer relationship management was the first to introduce a new operating model and a new business model. Today, the enterprise itself must begin to operate like a software company. That requires harmonizing those islands into a single unified layer where data and application logic collapse into an integrated System of Intelligence. Agents rely on this harmonized context to make decisions and, when needed, invoke legacy applications to execute workflows. Operating this way also demands a new operations model: a build-to-order assembly line for knowledge work that blends the customization of consulting with the efficiency of high-volume fulfillment. Humans supervise agents, and in doing so progressively encode their expertise into the system. ... The important point to remember is that islands of automation impede management’s core function – planning, resource allocation and orchestration with full visibility across levels of detail and business domains. Data lakes do not solve this by themselves; each star schema is another island. Near-term, organizations can start small and let agents interrogate a single domain (for example, the sales cube) and take limited actions by calling systems of record via MCP servers, for example, viewing a customer’s complaints and initiating a return authorization.
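
The "start small" pattern at the end of that excerpt can be sketched concretely. Everything below is an invented stand-in: a toy sales cube the agent may interrogate read-only, plus one constrained action against a system of record, the step a real deployment would expose as a tool via an MCP server.

```python
# Toy single-domain data: a flattened "sales cube" with complaint annotations.
SALES_CUBE = [
    {"customer": "acme", "order": 1001, "complaint": "damaged on arrival"},
    {"customer": "acme", "order": 1002, "complaint": None},
]

def complaints_for(customer):
    # Read-only interrogation of one domain: the agent may look, not write.
    return [r for r in SALES_CUBE
            if r["customer"] == customer and r["complaint"]]

def initiate_return_authorization(order_id):
    # The one limited action the agent is allowed: invoking a system of
    # record (stand-in for a legacy application behind an MCP server).
    return {"rma_for": order_id, "status": "pending"}
```

The point of the constraint is governance: the agent's action surface is a short, auditable list rather than open-ended access to every application.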


Companies are making the same mistake with AI that Tesla made with robots

Shai Ahrony, CEO of marketing agency Reboot Online, calls this phenomenon the "AI aftershock." "Companies that rushed to cut jobs in the name of AI savings are now facing massive, and often unexpected costs," he told ZDNET. "We've seen customers share examples of AI-generated errors -- like chatbots giving wrong answers, marketing emails misfiring, or content that misrepresents the brand -- and they notice when the human touch is missing." ... Some companies have already learned painful lessons about AI's shortcomings and adjusted course accordingly. In one early example from last year, McDonald's announced that it was retiring an automated order-taking technology that it had developed in partnership with IBM after the AI-powered system's mishaps went viral across social media. ... McDonald's and Klarna's decisions to backtrack on AI in favor of humans are reminiscent of a similar about-face from Tesla. In 2018, after Tesla failed to meet production quotas for its Model 3, CEO Elon Musk admitted in a tweet that the electric vehicle company's reliance upon "excessive automation…was a mistake." "Humans are underrated," he added. Businesses aggressively pushing to deploy AI-powered customer service initiatives in the present could come to a similar conclusion: that even though the technology helps to cut spending and boost efficiency in some domains, it isn't able to completely replicate the human touch.


How Can the Usage of AI Help Boost DevOps Pipelines

AI now plays a key role in CI/CD, using machine learning algorithms and intelligent automation to detect errors proactively, optimize resource usage, and accelerate release cycles. With AI, CI/CD pipelines can learn, adapt and optimize themselves, redefining software development from start to finish. By combining AI and DevOps, you can eliminate silos, recover faster from outages and open up new business revenue streams. Today’s businesses are increasingly leveraging artificial intelligence capabilities throughout their DevOps pipelines to make their CI/CD pipelines intelligent, thereby enabling them to predict problems faster, optimize the pipelines if needed, and recover from failures without the need for any human intervention. ... When you adopt AI into the DevOps practices in your organization, you are applying specific technologies to automate, optimize, and enhance each stage of the software development lifecycle – coding, testing, deployment, and monitoring. Today’s organizations are using AI in their DevOps pipelines to drive innovation, enabling teams to work seamlessly and achieve rapid development and deployment cycles. ... AI can help in DevSecOps in ways such as automating security testing, automating threat detection, and streamlining incident response. You can use AI-powered tools to scan your application source code for security vulnerabilities, automate software patches, automate incident responses, and monitor in real-time to identify anomalies.
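
As a minimal illustration of "predicting problems faster," here is a simple statistical stand-in for the ML-driven anomaly detection described above: flag builds whose duration deviates sharply from the recent baseline, so a degrading pipeline surfaces before it fails outright. The window and threshold values are arbitrary choices for the sketch.

```python
from statistics import mean, stdev

def flag_anomalous_builds(durations, window=5, z=2.0):
    # Flag any build whose duration is more than z standard deviations away
    # from the mean of the previous `window` builds.
    flagged = []
    for i in range(window, len(durations)):
        baseline = durations[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(durations[i] - mu) > z * sigma:
            flagged.append(i)
    return flagged
```

A real pipeline would feed this from CI telemetry and act on the flags (alerting, skipping a deploy gate); production systems typically use richer models, but the input/output shape is the same.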

Daily Tech Digest - July 10, 2025


Quote for the day:

"Strive not to be a success, but rather to be of value." -- Albert Einstein


Domain-specific AI beats general models in business applications

Like many AI teams in the mid-2010s, Visma’s group initially relied on traditional deep learning methods such as recurrent neural networks (RNNs), similar to the systems that powered Google Translate back in 2015. But around 2020, the Visma team made a change. “We scrapped all of our development plans and have been transformer-only since then,” says Claus Dahl, Director ML Assets at Visma. “We realized transformers were the future of language and document processing, and decided to rebuild our stack from the ground up.” ... The team’s flagship product is a robust document extraction engine that processes documents in the countries where Visma companies are active. It supports a variety of languages. The AI could be used for documents such as invoices and receipts. The engine identifies key fields, such as dates, totals, and customer references, and feeds them directly into accounting workflows. ... “High-quality data is more valuable than high volumes. We’ve invested in a dedicated team that curates these datasets to ensure accuracy, which means our models can be fine-tuned very efficiently,” Dahl explains. This strategy mirrors the scaling laws used by large language models but tailors them for targeted enterprise applications. It allows the team to iterate quickly and deliver high performance in niche use cases without excessive compute costs.
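
To show the shape of the output such an engine feeds into accounting workflows, here is a deliberately crude regex stand-in for the transformer-based extraction Visma describes. The field patterns are invented and far simpler than what a real model handles; the point is only the structured record that comes out.

```python
import re

# Toy patterns for the key fields the article names: dates, totals,
# and customer references.
FIELDS = {
    "date": re.compile(r"\b(\d{4}-\d{2}-\d{2})\b"),
    "total": re.compile(r"Total:\s*([\d.,]+)"),
    "customer_ref": re.compile(r"Ref:\s*(\w+)"),
}

def extract_fields(text):
    # Return one value per field, or None when the pattern finds nothing.
    return {name: (m.group(1) if (m := rx.search(text)) else None)
            for name, rx in FIELDS.items()}
```

The advantage of the transformer approach over patterns like these is exactly what the excerpt says: it generalizes across languages and layouts instead of breaking on the first unfamiliar invoice format.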


The case for physical isolation in data centre security

Hardware-enforced physical isolation is fast becoming a cornerstone of modern cybersecurity strategy. These physical-layer security solutions allow your critical infrastructure – servers, storage and network segments – to be instantly disconnected on demand, using secure, out-of-band commands. This creates a last line of defence that holds even when everything else fails. After all, if malware can’t reach your system, it can’t compromise it. If a breach does occur, physical segmentation contains it in milliseconds, stopping lateral movement and keeping operations running without disruption. In stark contrast to software-only isolation, which relies on the very systems it seeks to protect, hardware isolation remains immune to tampering. ... When ransomware strikes, every second counts. In a colocation facility, traditional defences might flag the breach, but not before it worms its way across tenants. By the time alerts go out, the damage is done. With hardware isolation, there’s no waiting: the compromised tenant can be physically disconnected in milliseconds, before the threat spreads, before systems lock up, before wallets and reputations take a hit. What makes this model so effective is its simplicity. In an industry where complexity is the norm, physical isolation offers a simple, fundamental truth: you’re either connected or you’re not. No grey areas. No software dependency. Just total certainty.


Scaling without outside funding: Intuitive's unique approach to technology consulting

We think for any complex problem, a good 60–70% of it can be solved through innovation. That's always our first principle. Then, where we see inefficiencies, be it in workflows or process, automation works for the other 20% of the friction. The remaining 10–20% is where engineering plays its important role, and it allows us to address the scale, security and governance aspects. In data specifically, we are referencing the last 5–6 years of massive investments. We partner with platforms like Databricks and DataMiner and we've invested in companies like TESL and Strike AI for securing their AI models. ... In the cloud space, we see a shift from migration to modernisation (and platform engineering). Enterprises are focussing on modernisation of both applications and databases because those are critical levers of agility, security, and business value. In AI it is about data readiness; the majority of enterprise data is very fragmented or very poor quality which makes any AI effort difficult. Next is understanding existing processes—the way work is done at scale—which is critical for enabling GenAI. But the true ROI is Agentic AI—autonomous systems which don’t just tell you what to do, but just do it. We’ve been investing heavily in this space since 2018.


The Future of Professional Ethics in Computing

Recent work on ethics in computing has focused on artificial intelligence (AI) with its success in solving problems, processing large amounts of data, and with the award of Nobel Prizes to AI researchers. Large language models and chatbots such as ChatGPT suggest that AI will continue to develop rapidly, acquire new capabilities, and affect many aspects of human existence. Many of the issues raised in the ethics of AI overlap previous discussions. The discussion of ethical questions surrounding AI is reaching a much broader audience, has more societal impact, and is rapidly transitioning to action through guidelines and the development of organizational structure, regulation, and legislation. ... Ethics of digital technologies in modern societies raises questions that traditional ethical theories find difficult to answer. Current socio-technical arrangements are complex ecosystems with a multitude of human and non-human stakeholders, influences, and relationships. The questions of ethics in ecosystems include: Who are members? On what grounds are decisions made and how are they implemented and enforced? Which normative foundations are acceptable? These questions are not easily answered. Computing professionals have important contributions to make to these discussions and should use their privileges and insights to help societies navigate them.


AI Agents Vs RPA: What Every Business Leader Needs To Know

Technically speaking, RPA isn’t intelligent in the same way that we might consider an AI system like ChatGPT to mimic some functions of human intelligence. It simply follows the same rules over and over again in order to spare us the effort of doing it. RPA works best with structured data because, unlike AI, it doesn't have the ability to analyze and understand unstructured data, like pictures, videos, or human language. ... AI agents, on the other hand, use language models and other AI technologies like computer vision to understand and interpret the world around them. As well as simply analyzing and answering questions about data, they are capable of taking action by planning how to achieve the results they want and interacting with third-party services to get it done. ... Using RPA, it would be possible to extract details about who sent the mail, the subject line, and the time and date it was sent. This can be used to build email databases and broadly categorize emails according to keywords. An agent, on the other hand, could analyze the sentiment of the email using language processing, prioritize it according to urgency, and even draft and send a tailored response. Over time, it learns how to improve its actions in order to achieve better resolutions.


How To Keep AI From Making Your Employees Stupid

Treat AI-generated content like a highly caffeinated first draft – full of energy, but possibly a little messy and prone to making things up. Your job isn’t to just hit “generate” and walk away unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss (or worse, your audience). Always, always edit aggressively, proofread and, most critically, fact-check every single output. This process isn’t just about catching AI’s mistakes; it actively engages your critical thinking skills, forcing you to verify information and refine expression. Think of it as intellectual calisthenics. ... Don’t settle for the first answer AI gives you. Engage in a dialogue. Refine your prompts, ask follow-up questions, request different perspectives and challenge its assumptions. This iterative process of refinement forces you to think more clearly about your own needs, to be precise in your instructions, and to critically evaluate the nuances of the AI’s response. ... The MIT study serves as a crucial wake-up call: over-reliance on AI can indeed make us “stupid” by atrophying our critical thinking skills. However, the solution isn’t to shun AI, but to engage with it intelligently and responsibly. By aggressively editing, proofreading and fact-checking AI outputs, by iteratively refining prompts and by strategically choosing the right AI tool for each task, we can ensure AI serves as a powerful enhancer, not a detrimental crutch.


What EU’s PQC roadmap means on the ground

The EU’s PQC roadmap is broadly aligned with that from NIST; both advise a phased migration to PQC with hybrid-PQC ciphers and hybrid digital certificates. These hybrid solutions provide the security promises of brand new PQC algorithms, whilst allowing legacy devices that do not support them, to continue using what’s now being called ‘classical cryptography’. In the first instance, both the EU and NIST are recommending that non-PQC encryption is removed by 2030 for critical systems, with all others following suit by 2035. While both acknowledge the ‘harvest now, decrypt later’ threat, neither emphasise the importance of understanding the cover time of data; nor reference the very recent advancements in quantum computing. With many now predicting the arrival of cryptographically relevant quantum computers (CRQC) by 2030, if organizations or governments have information with a cover time of five years or more, it is already too late for many to move to PQC in time. Perhaps the most significant difference that EU organizations will face compared to their American counterparts, is that the European roadmap is more than just advice; in time it will be enforced through various directives and regulations. PQC is not explicitly stated in EU regulations, although that is not surprising.


The trillion-dollar question: Who pays when the industry’s AI bill comes due?

“The CIO is going to be very, very busy for the next three, four years, and that’s going to be the biggest impact,” he says. “All of a sudden, businesspeople are starting to figure out that they can save a ton of money with AI, or they can enable their best performers to do the actual job.” Davidov doesn’t see workforce cuts matching AI productivity increases, even though some job cuts may be coming. ... “The costs of building out AI infrastructure will ultimately fall to enterprise users, and for CIOs, it’s only a question of when,” he says. “While hyperscalers and AI vendors are currently shouldering much of the expense to drive adoption, we expect to see pricing models evolve.” Bhathena advises CIOs to look beyond headline pricing because hidden costs, particularly around integrating AI with existing legacy systems, can quickly escalate. Organizations using AI will also need to invest in upskilling employees and be ready to navigate increasingly complex vendor ecosystems. “Now is the time for organizations to audit their vendor agreements, ensure contract flexibility, and prepare for potential cost increases as the full financial impact of AI adoption becomes clearer,” he says. ... Baker advises CIOs to be careful about their purchases of AI products and services and tie new deployments to business needs.


Multi-Cloud Adoption Rises to Boost Control, Cut Cost

Instead of building everything on one platform, IT leaders are spreading out their workloads, said Joe Warnimont, senior analyst at HostingAdvice. "It's no longer about chasing the latest innovation from a single provider. It's about building a resilient architecture that gives you control and flexibility for each workload." Cost is another major factor. Even though hyperscalers promote their pay-as-you-go pricing, many enterprises find it difficult to predict and manage costs at scale. This is true for companies running hundreds or thousands of workloads across different regions and teams. "You'd think that pay-as-you-go would fit any business model, but that's far from the case. Cost predictability is huge, especially for businesses managing complex budgets," Warnimont said. To gain more control over pricing and features, companies are turning to alternative cloud providers, such as DigitalOcean, Vultr and Backblaze. These platforms may not have the same global footprint as AWS or Azure but they offer specialized services, better pricing and flexibility for certain use cases. An organization needing specific development environments may go to DigitalOcean. Another may choose Vultr for edge computing. Sometimes the big players just don't offer what a specific workload requires.


How CISOs are training the next generation of cyber leaders

While Abousselham champions a personalized, hands-on approach to developing talent, other CISOs are building more formal pathways to support emerging leaders at scale. For PayPal CISO Shaun Khalfan, structured development was always part of his career. He participated in formal leadership training programs offered by the Department of Defense and those run by the American Council for Technology. ... Structured development is also happening inside companies like the insurance brokerage firm Brown & Brown. CISO Barry Hensley supports an internal cohort program designed to identify and grow emerging leaders early in their careers. “We look at our – I’m going to call it newer or younger – employees,” he explains. “And if you become recognized in your first, second, or third year as having the potential to [become a leader], you get put in a program,” he explains. ... Khalfan believes good CISOs should be able to dive deep with engineers while also leading boardroom conversations. “It’s been a long time since I’ve written code,” he says, “but I at least understand how to have a deep conversation and also be able to have a board discussion with someone.” Abousselham agrees that technical experience is only one part of the puzzle.

Daily Tech Digest - June 11, 2025


Quote for the day:

"The key to success is to focus on goals, not obstacles." -- Unknown



The future of RPA ties to AI agents

“Unlike RPA bots, which follow predefined rules, AI agents are learning from data, making decisions, and adapting to changing business logic,” Khan says. “AI agents are being used for more flexible tasks such as customer interactions, fraud detection, and predictive analytics.” Khan sees RPA’s role shifting in the next three to five years, as AI agents become more prevalent. Many organizations will embrace hyperautomation, which uses multiple technologies, including RPA and AI, to automate business processes. “Use cases for RPA most likely will be integrated into broader AI-powered workflows instead of functioning as standalone solutions,” he says. ... “RPA isn’t dying — it’s evolving,” he says. “We’ve tested various AI solutions for process automation, but when you need something to work the same way every single time — without exceptions, without interpretations — RPA remains unmatched.” Radich and other automation experts see AI agents eventually controlling RPA bots, with various robotic processes in a toolbox for agents to choose from. “Today, we build separate RPA workflows for different scenarios,” Radich says. “Tomorrow, with our agentic capabilities, an agent will evaluate an incoming request and determine whether it needs RPA for data processing, API calls for system integration, or human handoff for complex decisions.”
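
The routing decision Radich describes can be sketched as a toy dispatcher. The field names and the decision order are invented for the sketch; in practice the evaluation step would be an LLM-backed agent rather than three `if` statements, but the contract is the same: one request in, one tool choice out.

```python
def route_request(request):
    # Agent-as-dispatcher: inspect the request and pick a tool from the
    # toolbox -- RPA for deterministic data processing, an API call for
    # system integration, or human handoff for judgment calls.
    if request.get("needs_judgment"):
        return "human_handoff"
    if request.get("structured") and request.get("repetitive"):
        return "rpa_workflow"
    return "api_integration"
```

This is why RPA survives in the agentic world: the deterministic branch still exists, it is just no longer the entry point.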


The path to better cybersecurity isn’t more data, it’s less noise

SOCs deal with tens of thousands of alerts every day. It’s more than any person can realistically keep up with. When too much data comes in at once, things get missed. Responses slow down and, over time, the constant pressure can lead to burnout. ... The trick is to start spotting patterns. Look at what helped in past investigations. Was it a login from an odd location? An admin running commands they normally don’t? A device suddenly reaching out to strange domains? These are the kinds of details that stand out once you understand what typical system behavior looks like. At first, you won’t. That’s okay. Spend time reading through old incident reports. Watch how the team reacts to real alerts. Learn which ones actually spark investigations and which ones get dismissed without a second glance. ... Start by removing logs and alerts that don’t add value. Many logs are never looked at because they don’t contain useful information. Logs showing every successful login might not help if those logins are normal. Some logs repeat the same information, like system status messages. ... Next, think about how long to keep different types of logs. Not all logs need to be saved for the same amount of time. Network traffic logs might only be useful for a few days because threats usually show up quickly. 
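
The two pruning ideas above, drop low-value logs outright and keep the rest only as long as they are useful, can be expressed as a tiny keep/drop rule. The log types, the successful-login heuristic, and the retention windows below are invented examples, not a recommended policy.

```python
# Hypothetical retention windows, in days, per log type.
RETENTION_DAYS = {"network": 7, "auth": 90, "system_status": 1}

def keep_log(entry, today):
    # Drop routine noise outright, then expire the rest by type: network
    # traffic ages out fast, authentication events are kept longer.
    if entry["type"] == "auth" and entry.get("result") == "success":
        return False  # routine successful logins rarely aid investigations
    age_days = today - entry["day"]
    return age_days <= RETENTION_DAYS.get(entry["type"], 30)
```

The useful property of encoding the policy this way is that it becomes reviewable: when an investigation reveals that some dropped signal would have helped, the rule is one line to change.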


The EU challenges Google and Cloudflare with its very own DNS resolver that can filter dangerous traffic

DNS4EU aims to be an alternative to major US-based public DNS services (like Google and Cloudflare) to boost the EU's digital autonomy by reducing European reliance on foreign infrastructure. This isn't only an EU-developed DNS, though. DNS4EU comes with built-in filters against malicious domains, like those hosting malware, phishing, or other cybersecurity threats. The home user version can also block ads and/or adult content. ... DNS4EU, which the EU assures "will not be forced on anyone," has been developed to meet different users' needs. The home users' version is a public and free DNS resolver that comes with the option to add filters to block ads, malware, adult content, all of these, or none. There's also a dedicated version for government entities and telecom providers that operate within the European Union. As mentioned earlier, DNS4EU comes with a built-in filter to block dangerous traffic alongside the ability to provide regional threat intelligence. This means that a malicious threat discovered in one country could be blocked simultaneously across several regions and countries, effectively halting its spread. ... The Senior Director for European Government and Regulatory Affairs at the Internet Society, David Frautschy Heredia, also warns against potential risks related to content filtering, arguing that "safeguards should be developed to prevent abuse."


AgenticOps: How Cisco is Rewiring Network Operations for the AI Age

AI Canvas is where AgenticOps comes to life. It’s the industry’s first generative UI built for cross-domain IT operations, unifying NetOps, SecOps, IT, and executives into one collaborative environment. Powered by real-time telemetry from Meraki, ThousandEyes, Splunk, and more, AI Canvas brings together data from across the stack into one intelligent, always-on view. But this isn’t just visibility. It’s AI already operating. When a service issue hits, AI Canvas pulls in the right data, connects the dots, and surfaces a live picture of what matters—before anyone even asks. Every session starts with context, whether launched by AI or by an IT engineer. Embedded into AI Canvas is the Cisco AI Assistant, your interface to the agentic system. Ask a question in natural language. Dig into root cause. Explore options. The AI Assistant guides you through diagnostics, decisions, and actions, all grounded in live telemetry. And when you’re ready to share, just drag your findings into AI Canvas. From there, with one click you can invite collaborators—and that’s when the canvas comes fully alive. Every insight becomes part of a shared investigation with AI Canvas actively thinking, collaborating, and evolving the UI at every step. But it doesn’t stop at diagnosis—AI Canvas acts. It applies changes, monitors impact, and shares outcomes in real time.


8 things CISOs have learned from cyber incidents

Brown believes there are often important lessons that come out of breaches, whether it’s high-profile ones that end up in textbooks and university courses, or experiences that can be shared among peers through conference panels and other events. “Always look for good to come from events. How can you help the industry forward? Can you help the CISO community?” he says. ... Many incident-hardened CISOs will shift their approach and their mindset about experiencing an attack first-hand. “You’ll develop an attack-minded perspective, where you want to understand your attack surface better than your adversary, and apply your resources accordingly to insulate against risk,” says Cory Michel, VP security and IT at AppOmni, who’s been on several incident response teams. In practice, shifting from defense to offense means preparing for different types of incidents, be it platform abuse, exploitation or APTs, and tailoring responses. ... The playbook needs clear guidance on communication, during and after an incident, because this can be overlooked while dealing with the crisis, but in the end, it may come to define the lasting impact of a breach that becomes common knowledge. “Every word matters during a crisis,” says Brown. “Of what you publish, what you say, how you say it. So, it’s very important to be prepared for that.”


The five security principles driving open source security apps at scale

Open-source AI’s ability to act as an innovation catalyst is proven. What is unknown is the downside or the paradox that’s being created with the all-out focus on performance and the ubiquity of platform development and support. At the center of the paradox for every company building with open-source AI is the need to keep it open to fuel innovation, yet gain control over security vulnerabilities and the complexity of compliance. ... Regulatory compliance is becoming more complex and expensive, further fueling the paradox. Startup founders, however, tell VentureBeat that the high costs of compliance can be offset by the data their systems generate. They’re quick to point out that they do not intend to deliver governance, risk, and compliance (GRC) solutions; however, their apps and platforms are meeting the needs of enterprises in this area, especially across Europe. ... “EU AI Act, for example, is starting its enforcement in February, and the pace of enforcement and fines is much higher and aggressive than GDPR. From our perspective, we want to help organizations navigate those frameworks, ensuring they’re aware of the tools available to leverage AI safely and map them to risk levels dictated by the Act.”


What We Wish We Knew About Container Security

Each container maps to a process ID in Linux. The illusion of separation is created using kernel namespaces. These namespaces hide resources like filesystems, network interfaces and process trees. But the kernel remains shared. That shared kernel becomes the attack surface. And in the event of a container escape, that attack surface becomes a liability. Common attack vectors include exploiting filesystem mounts, abusing symbolic links or leveraging misconfigured privileges. These exploits often target the host itself. Once inside the kernel, an attacker can affect other containers or the infrastructure that supports them. This is not just theoretical. Container escapes happen, and when they do, everything on that node becomes suspect. ... Virtual machines fell out of favor because of performance overhead and slow startup times. But many of those drawbacks have since been addressed. Projects leveraging paravirtualization, for example, now offer performance comparable to containers while restoring strong workload isolation. Paravirtualization modifies the guest OS to interact efficiently with the hypervisor. It eliminates the need to emulate hardware, reducing latency and improving resource usage. Several open source projects have explored this space, demonstrating that it’s possible to run containers within lightweight virtual machines. 
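The namespace mechanism described above can be inspected directly from any process. Below is a minimal sketch; the `/proc/self/ns` inspection assumes a Linux host (on other systems the helper simply returns an empty dict), and the list of namespace types is descriptive, not exhaustive of every kernel version:

```python
import os

# The Linux namespace types that give containers their illusion of
# isolation. Each hides one class of resource; the kernel stays shared.
NAMESPACES = ["mnt", "net", "pid", "uts", "ipc", "user", "cgroup"]

def visible_namespaces():
    """Return this process's namespace links from /proc/self/ns (Linux
    only). Two containers show different link targets here, yet both sit
    on the same shared kernel, which is why an escape is so damaging."""
    ns_dir = "/proc/self/ns"
    if not os.path.isdir(ns_dir):
        return {}
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in os.listdir(ns_dir)}

print(sorted(NAMESPACES))
```

Comparing `visible_namespaces()` output between a container and its host makes the article's point tangible: the per-process links differ, but `uname -r` is identical everywhere because the kernel is one and the same.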


The unseen risks of cloud data sharing and how companies can safeguard intellectual property

For many technology-driven sectors, intellectual property lies at their core. This is particularly true in the fields of software development, pharmaceuticals, and design innovation. For companies in these fields, IP theft can have serious consequences. Unfortunately, cybercriminals increasingly target valuable IP because it can be sold or used to undermine the original creators. According to the Verizon 2025 Data Breach Investigation Report, nearly 97 per cent of these attacks in the Asia-Pacific region are fuelled by social engineering, system intrusion and web app attacks. This alarming trend highlights the urgent need for stronger data protection measures. ... While cloud platforms present unique challenges for securing IP, they also offer some potential solutions. One of the most effective ways to protect data is through encryption. Encrypting files before they are uploaded to the cloud ensures that even if unauthorised access is gained, the data remains unreadable without the proper decryption key. For organisations that rely on cloud platforms for collaboration, file-level encryption is crucial. This form of encryption ensures that sensitive data is protected not just at rest but throughout its entire lifecycle in the cloud. Many cloud platforms offer built-in encryption tools, but companies can also implement third-party solutions to enhance the protection of their intellectual property.
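The encrypt-before-upload workflow can be sketched as follows. To keep the example self-contained, the keystream here is a toy SHA-256 construction for illustration of the workflow only; it is NOT real cryptography, and production systems should use a vetted library with an authenticated cipher such as AES-GCM:

```python
import hashlib
from itertools import count

def _keystream(key: bytes, nonce: bytes):
    """Toy SHA-256 counter keystream (illustration only, NOT real crypto)."""
    for i in count():
        yield from hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()

def toy_encrypt(data: bytes, key: bytes, nonce: bytes) -> bytes:
    """XOR data with the keystream; applying it twice decrypts."""
    ks = _keystream(key, nonce)
    return bytes(b ^ next(ks) for b in data)

# Encrypt locally BEFORE upload: the cloud provider only sees ciphertext.
plaintext = b"design-spec-v2: confidential"
ciphertext = toy_encrypt(plaintext, key=b"secret-key", nonce=b"file-001")
assert ciphertext != plaintext
# The same operation with the same key/nonce recovers the file.
assert toy_encrypt(ciphertext, b"secret-key", b"file-001") == plaintext
```

The workflow shape, not the cipher, is the point: the key never leaves the organisation, so a breach of the cloud store exposes only unreadable bytes.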


The Critical Role of a Data Pipeline in Security

By implementing a data pipeline and prioritizing the optimization and reduction of data volume before it reaches the SIEM, organizations can stay on budget and still ensure that all necessary data can be thoroughly examined. Data pipelines also lead to tangible reductions in both storage and processing expenses. ... The decrease in the sheer volume of data that the SIEM must handle directly can significantly reduce the total cost of SIEM operations. In addition to volume reduction, data pipelines improve the quality of data delivered to SIEMs and other tools — filtering out repetitive noise and enriching logs for faster queries, increased relevance, and prioritization of the most critical security events. Data pipelines also introduce efficiency by automating the collection, processing, and routing of data. By reducing alert fatigue through intelligent anomaly detection and prioritization, data pipelines can significantly speed up incident resolution times. Beyond immediate threat detection and cost savings, data pipelines also aid in maintaining compliance with privacy regulations like GDPR, CCPA, and PCI. They help provide clear data lineage, making it easier to track the origin and transformations of data. 
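The filter-enrich-route pattern described above might look like this in miniature. The event types, lookup table, and field names are hypothetical stand-ins for whatever a real pipeline would use:

```python
# Hypothetical pipeline stages: drop repetitive noise, enrich what
# remains, then route only relevant events to the (expensive) SIEM.
NOISY_EVENTS = {"heartbeat", "status_ok"}

# Sample enrichment lookup (a real pipeline might use geo-IP or asset data).
GEO = {"10.0.0.5": "office-lan", "203.0.113.9": "unknown-external"}

def pipeline(events):
    for e in events:
        if e["type"] in NOISY_EVENTS:
            continue                                # volume reduction
        e = dict(e, src_zone=GEO.get(e.get("src_ip"), "unmapped"))  # enrichment
        yield e                                     # routed on to the SIEM

raw = [
    {"type": "heartbeat", "src_ip": "10.0.0.5"},
    {"type": "login_failure", "src_ip": "203.0.113.9"},
]
kept = list(pipeline(raw))
print(len(kept))  # 1
```

Even this toy version shows where the savings come from: the SIEM ingests one enriched, security-relevant event instead of two raw ones.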


Why you need diverse third-party data to deliver trusted AI solutions

Data diversity refers to the variety and representation of different attributes, groups, conditions, or contexts within a dataset. It ensures that the dataset reflects the real-world variability in the population or phenomenon being studied. The diversity of your data helps ensure that the insights, predictions, and decisions derived from it are fair, accurate, and generalizable. ... Before you start your data analysis, it’s important to understand what you want to do with your data. A keen understanding of your use cases and data applications can help identify gaps and hypotheses you need to work to solve. It also gives you a method for seeking the data that fits your specific use case. In the same way, starting with a clear question provides direction, focus, and purpose to the whole process of text data analysis. Without one, you’ll inevitably gather irrelevant data, overlook key variables, or find yourself looking at a dataset that’s irrelevant to what you actually want to know. ... When certain voices, topics, or customer segments are over- or underrepresented in the data, models trained on that data may produce skewed results: misunderstanding user needs, overlooking key issues, or favoring one group over another. This can result in poor customer experiences, ineffective personalization efforts, and biased decision-making. 
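A simple representation check along these lines can flag under-represented groups before a model is trained on the data. The attribute name and threshold below are illustrative assumptions, not a universal standard:

```python
from collections import Counter

def representation_gaps(records, attribute, min_share=0.10):
    """Return attribute values whose share of the dataset falls below a
    chosen minimum threshold (attribute and threshold are illustrative)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items() if c / total < min_share}

# A skewed sample: one segment dominates, two are under-represented.
data = ([{"segment": "enterprise"}] * 90
        + [{"segment": "smb"}] * 10
        + [{"segment": "startup"}] * 2)
print(representation_gaps(data, "segment"))
```

Flagging the gap is only step one; the remedy (collecting more third-party data for the thin segments, reweighting, or narrowing the claimed scope of the model) is a judgment call the article's advice about knowing your use case is meant to inform.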

Daily Tech Digest - January 26, 2025


Quote for the day:

“If you don’t try at anything, you can’t fail… it takes backbone to lead the life you want” -- Richard Yates

Here’s Why Physical AI Is Rapidly Gaining Ground And Lauded As The Next AI Big Breakthrough

If we are going to connect generative AI to all kinds of robots and other machines that are wandering around in our homes, offices, factories, streets, and the like, we ought to expect that the AI will do so properly, safely, and with aplomb. Can an AI that only has text-based data training adequately control and direct those real-world machines as they mix among people? Some assert that this is a highly dangerous concern. The generative AI relies on what amounts to book learning to guess what will happen when a robot is instructed by the AI to lift a chair or hold aloft a dog. Is that good enough to cope with the myriad aspects that can go wrong? Perhaps the AI will, by text-based logic, assume that if the dog is dropped, it will bounce like a rubber ball. Ouch, the dog might not be amused. ... AI researchers are scurrying to craft Physical AI. The future depends on this capability. Machines and robots are going to be built and shipped to work side-by-side with humans. Physical AI will be the make-or-break of whether those mechanizations are compatible with humans and operate properly in the real world or instead are endangering and harmful.


Why workload repatriation must be part of true multi-cloud strategies

Repatriation can provide benefits such as cost optimization and enhanced control, but it also introduces significant challenges. Key obstacles organizations encounter during cloud repatriation include the absence of cloud-native services, limited access to provider-managed applications, the need for highly skilled professionals, and potentially substantial capital expenditures required for building or upgrading on-premises infrastructure. Migrating workloads back on-premises often results in the development of hybrid environments or, in cases where multiple public cloud providers are used, multi-cloud environments. This shift adds complexity to managing IT infrastructure, requiring greater coordination and expertise. In public cloud environments, providers offer a wide array of managed services, automated management, and orchestration capabilities that simplify operations and reduce the burden on IT teams. When repatriating workloads, organizations must find alternatives or develop in-house solutions to replicate these functionalities. This can be time-consuming, costly, and may result in reduced capabilities compared to cloud-native offerings. As such, organizations must carefully balance the trade-offs between the advanced capabilities of cloud-native solutions and the control offered by on-premises environments. 


3 hidden benefits of Dedicated Internet Access for enterprises

DIA is designed to support bandwidth-heavy tasks such as cloud-based applications and video conferencing. It ensures seamless connectivity, helping streamline operations and prevent performance issues. Routine activities like large file sharing, backups, and data transfers are completed more efficiently, while internal communication across multiple business locations becomes smoother and more reliable. Think of DIA as your business’s private Internet highway. Unlike shared connections, it provides uninterrupted service, essential for maintaining optimal workflows and boosting productivity. For companies that rely on consistent and high-performance Internet access, DIA offers a dependable solution tailored to meet these demands. ... Fast website loading times and smooth online transactions are essential for satisfying customers. DIA helps businesses deliver a premium online experience, which can significantly improve customer loyalty. This reliable performance extends to all business locations, including branch offices. With DIA, businesses can ensure consistent, high-quality interactions with their customers—whether accessing resources or reaching out through support channels. Additionally, DIA enhances customer support by ensuring messaging services remain continuously available, allowing businesses to respond quickly and efficiently to customer needs.


Data engineering - Pryon: Turning chaos into clarity

Data Engineering is the discipline that takes raw, unstructured data and transforms it into actionable, high-value insights. Without a strong data foundation, the 1 in 3 enterprises spending an average of $10M on AI projects next year are setting themselves up for failure. As data creation accelerates – 90% of the world’s data has been generated in the last two years – engineers are tasked with more than just managing it. They have to structure, organise and operationalise data so it can actually be useful and produce the right outputs. From building reliable pipelines to ensuring data quality, engineering teams play the central role in making systems that actually solve problems. ... Data synthesis is interesting, but taking action is paramount. The final step is putting it to work. Whether that means automating workflows, making real-time decisions, or delivering predictive insights, this is where the rubber meets the road. Agentic orchestration can enable systems to take the synthesised insights and act on them autonomously or with minimal human input. These engines bridge the gap between theory and practice, ensuring that your data doesn’t just sit idle – it drives measurable outcomes.


Leading with purpose: Insights from the Bhagavad Gita for modern managers

In a professional setting, the ability to manage emotions is crucial for success. A manager or individual who seeks gratification of ego and cannot regulate their emotions is likely to face challenges in achieving results. Actions driven by a sense of false ego can lead to conflicts, and misunderstandings, and ultimately hinder productivity. Such individuals may react impulsively rather than thoughtfully, allowing their emotions to cloud their judgment. When individuals learn to regulate their emotions and act from a place of calmness rather than chaos, they not only enhance their performance but also uplift those around them. A Sattvic approach to work fosters collaboration, creativity, and a shared sense of purpose. Conversely, when actions are driven by ego or excessive ambition (Tamsik), they often lead to stress and burnout. By embodying the teachings of the Gita—performing duties with dedication while remaining unattached to outcomes—individuals can achieve true mastery over their emotions. This mastery not only paves the way for personal success but also cultivates an environment where everyone can thrive together. While the entire Bhagavad Gita is replete with invaluable life lessons, these two shlokas stand out as particularly essential for effective management in the workplace. 


Accelerating HCM Cloud Implementation With RPA

Robotic Process Automation (RPA) provides a practical solution to streamline these processes. ... Many cloud platforms require Multi-Factor Authentication (MFA), which disrupts standard login routines for bots. However, we have addressed this by programmatically enabling RPA bots to handle MFA through integration with SMS or email-based OTP services. This allows seamless automation of login processes, even with additional security layers. ... It’s essential that users are assigned the correct authorizations in an HCM cloud, with ongoing maintenance of these permissions as individuals transition within the organization. Even with a well-defined scheme in place, it’s easy for someone to be shifted into a role that they shouldn’t hold. To address this challenge, we have leveraged RPA to automate the assignment of roles, ensuring adherence to least-privilege access models. ... Integrating with HCM systems through APIs often involves navigating rate limits that can disrupt workflows. To address this challenge, we implemented robust retry logic within our RPA bots, utilizing exponential backoff to gracefully handle API rate limit errors. This approach not only minimizes disruptions but also ensures that critical operations continue smoothly.
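The retry-with-exponential-backoff approach described above can be sketched in a few lines. The error type and the flaky API stub below are hypothetical stand-ins for a real HCM client hitting HTTP 429 rate limits:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 / rate-limit error from an HCM API."""

def with_backoff(call, max_retries=5, base_delay=1.0, max_delay=60.0):
    """Retry `call` on rate-limit errors, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # budget exhausted; surface the error
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))  # jitter spreads out bots

# Simulated API that rejects the first two calls, then succeeds.
calls = {"n": 0}
def flaky_api():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return {"status": "ok"}

print(with_backoff(flaky_api, base_delay=0.01))  # {'status': 'ok'}
```

The jitter factor matters in RPA fleets specifically: without it, many bots that were throttled together retry together and hit the rate limit again in lockstep.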


MDM and genAI: A match made in Heaven — or something less?

Despite its promising potential, AIoT faces several hurdles. One major challenge is interoperability. Many companies use IIoT devices and platforms from different manufacturers, which are not always seamlessly compatible. This complicates the implementation of integrated AIoT solutions and necessitates standardised interfaces and protocols. IIoT platforms such as Cumulocity can integrate various services and devices. A well-chosen platform facilitates the integration of new devices, enables easy scaling, and supports the flexible adaptation of an IIoT strategy. It also allows integration with other systems and technologies, such as ERP or CRM systems, thereby embedding IIoT technologies into existing business processes. Moreover, robust platforms offer specialised security features to protect connected devices from potential cybercriminal attacks. Another critical aspect is data preparation. In IoT environments, data quality is often poorer than businesses assume. Applying AI to inadequately prepared data produces subpar models that fail to deliver expected results. ... A further challenge is the skills shortage. Developing and implementing AIoT systems requires expertise in fields such as data analysis, machine learning, and cybersecurity. The demand for skilled professionals exceeds current supply, prompting companies to invest in training and development programmes.


Enterprise Architecture and Complexity

Complex architectures are characterised by attributes that make it challenging to manage using traditional project or program management methods. These architectures often have many layers, interconnected parts, variables, and dynamics that are not immediately apparent or easily understood. Complex architectures are also unpredictable (Theiss 2023) due to the communication and interaction required across and between the components. Managing an architecture build and deployment requires both broad and deep understanding of the interdependencies, interactions, and inherent constraints. As increasing levels of automation are deployed at scale, greater visibility and transparency are needed to understand not only the technologies and applications in play, but also the intended and unintended consequences and behaviour that they generate. Architectural artefacts and systems documentation (even if up to date) typically show elements such as nested operational processes as simple, generalised linkages and design patterns, which results in greater levels of ambiguity, not clarity. They only allow us to understand in part. As systems architectures become more complex in build, capability and scope, enhanced sense-making capabilities are needed to navigate components, to ensure a coherent, adaptive systems design.


Misinformation Is No. 1 Global Risk, Cyberespionage in Top 5

Misinformation campaigns in the form of deepfakes, synthetic voice recordings or fabricated news stories are now a leading mechanism for foreign entities to influence "voter intentions, sow doubt among the general public about what is happening in conflict zones, or tarnish the image of products or services from another country." This is especially acute in India, Germany, Brazil and the United States. Concern remains especially high following a year of the so-called "super elections," which saw heightened state-sponsored campaigns designed to manipulate public opinion.  ... Despite growing concerns, cyber resilience continues to be inadequate especially among small and mid-sized organizations, according to the report's findings. Thirty-five percent of small organizations believe their cyber resilience is inadequate, up from 5% in 2022. Many of these organizations lack the resources to invest in advanced cybersecurity measures, leaving them increasingly vulnerable to ransomware, phishing and other attacks. Seventy-one percent of cyber leaders say small organizations have already reached a "tipping point where they can no longer adequately secure themselves against the growing complexity of cyber risks." ... On one hand, AI-powered systems are proving invaluable in identifying threats, automating responses and analyzing vast amounts of data in real time.


Cloud repatriation – how to balance repatriation effectively and securely

Regardless of the reasons for making the move away from public cloud, the road to repatriation can be complex to navigate. Whether it is technical or talent issues, financial costs or compliance challenges, businesses making the switch should be prepared to spend time planning and executing an effective strategy. Within this strategy there are three areas that require special attention: observability, compliance and employing a holistic tech stack strategy. Observability is crucial in cloud repatriation because in order to move data and applications in-house, a business must understand them and how they are being used. It is only then you can ensure a smooth and effective transition. For example, there might be Shadow IT or AI that is being used by employees to get around IT policy and help them to get their work done faster. Sometimes these technologies will store data on a cloud service, so businesses need to be aware of them before making the switch. By leveraging observability, organizations can mitigate risks, optimize their infrastructure, and achieve successful repatriation that meets their strategic objectives. Compliance is also important as it is a major focus area for European and UK regulators with new and emerging regulations like DORA and NIS2 coming to the fore.


Daily Tech Digest - December 16, 2024

What IT hiring looks like heading into 2025

AI isn’t replacing jobs so much as it is reshaping the nature of work, said Elizabeth Lascaze, a principal in Deloitte Consulting’s Human Capital practice. She, too, sees evidence that entry-level roles focused on tasks like note-taking or basic data analysis are declining as organizations seek more experienced workers for junior positions. “Today’s emerging roles require workers to quickly leverage data, generate insights, and solve problems,” she said, adding that those skilled in using AI, such as cybersecurity analysts applying AI for threat detection, will be highly sought after. Although the adoption of AI has led to some “growing pains,” many workers are actually excited about it, Lascaze said, with most employees believing it will create new jobs and enhance their careers. “Our survey found that just 24% of early career workers and 14% of tenured workers fear their jobs will be replaced by AI,” Lascaze said. “Tenured workers are more likely to lead organizational strategy, so they may prioritize AI’s potential to improve efficiency, sophistication, and work quality in existing roles rather than AI’s potential to eliminate certain positions. “These workers reported being slightly more focused on building AI fluency than early-career employees,” Lascaze said. 


The Future of AI (And Travel) Relies on Synthetic Data

Synthetic data enhances accuracy and fairness in AI models as organic data can be biased or unbalanced, leading to ML models failing to represent diverse populations accurately. With synthetic data, researchers can create datasets that more accurately reflect the demographics they intend to serve, thereby minimizing biases and improving overall model robustness. ... Synthetic data can be a double-edged sword. While it addresses data privacy and availability challenges, it can inadvertently carry or magnify biases embedded in the original dataset. When source data is flawed, those imperfections can cascade into the synthetic version, skewing results — a critical concern in high-stakes domains like healthcare and finance, where precision and fairness are paramount. To counteract this, having a human in the loop is super important. While there’s a temptation to use synthetic data to fill in every gap for better accuracy and fairness, we understood that running synthetic searches for every flight combination possible globally for our price tracking and predictions feature could overwhelm our booking system and impact real travelers organically searching for flights. Synthetic data has limitations that go beyond bias. 


9 Cloud Service Adoption Trends

Most organizations are building modern cloud computing applications to enable greater scalability while reducing costs and consumption. They’re also more focused on the security and compliance of cloud systems and how providers are validating and ensuring data protection. “Their main focus is really around cost, but a second focus would be whether providers can meet or exceed their current compliance requirements,” says Will Milewski, SVP of cloud infrastructure and operations at content management solution provider Hyland. ... There’s a fundamental shift in cloud adoption patterns, driven largely by the emergence of AI and ML capabilities. Unlike previous cycles focused primarily on infrastructure migration, organizations are now having to balance traditional cloud ROI metrics with strategic technology bets, particularly around AI services. According to Kyle Campos, chief technology and product officer at cloud management platform provider CloudBolt Software, this evolution is being catalyzed by two major forces: First, cloud providers are aggressively pushing AI capabilities as key differentiators rather than competing on cost or basic services. Second, organizations are realizing that cloud strategy decisions today have more profound implications for future innovation capabilities than ever before.


We’ve come a long way from RPA: How AI agents are revolutionizing automation

As the AI ecosystem evolves, a significant shift is occurring toward vertical AI agents — highly specialized AI systems designed for specific industries or use cases. As Microsoft founder Bill Gates said in a recent blog post: “Agents are smarter. They’re proactive — capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior.” Unlike traditional software-as-a-service (SaaS) models, vertical AI agents do more than optimize existing workflows; they reimagine them entirely, bringing new possibilities to life. ... The most profound shift in the automation landscape is the transition from RPA to multi-agent AI systems capable of autonomous decision-making and collaboration. According to a recent Gartner survey, this shift will enable 15% of day-to-day work decisions to be made autonomously by 2028. These agents are evolving from simple tools into true collaborators, transforming enterprise workflows and systems. ... As AI agents progress from handling tasks to managing workflows and entire jobs, they face a compounding accuracy challenge. Each additional step introduces potential errors, multiplying and degrading overall performance.
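The compounding-accuracy point can be made concrete with a one-line model: if each step in an agent workflow succeeds independently with probability p, an n-step workflow succeeds with probability p to the power n. Independence is an idealizing assumption (real step errors often correlate), but the decay it predicts is the shape of the problem:

```python
# End-to-end accuracy under independent per-step success probability p:
# even strong per-step accuracy erodes quickly as steps accumulate.
def end_to_end_accuracy(p_step: float, n_steps: int) -> float:
    return p_step ** n_steps

for n in (1, 5, 10, 20):
    print(n, round(end_to_end_accuracy(0.95, n), 3))
```

At 95% per-step accuracy, ten chained steps already drop overall success below 60%, which is why agents managing whole jobs rather than single tasks need checkpoints, verification, or human handoffs along the way.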


8 reasons why digital transformations still fail

“People got really excited about, ‘We’re going to transform,’” Woerner says, but she believes part of the problem lies with leaders who “didn’t have the discipline to make the hard choices early on” to get employee buy-in. Ranjit Varughse, CIO of automotive paint and equipment firm Wesco Group, agrees. “The first challenge is getting digital transformation buy-in from teams at the outset. People are creatures of habit, making many hesitant to change their existing systems and processes,” he says. “Without a clear change management strategy to get a team aligned, ERP implementations in particular can be slow, stall, or even fail entirely.” ... Digital transformation isn’t a technology problem, it’s about understanding how people actually work, not how we think they should work, Wei says. “At PropertySensor, we scrapped our first version after realizing real estate agents needed mobile-first solutions, not desktop dashboards,” he says. ... “People, process, and technology” is a common phrase technology leaders use when discussing the critical elements of a transformation. “But the real focus should be people, people, people,” echoes Megan Williams, vice president of global technology strategy and transformation at TransUnion.


How companies can address bias and privacy challenges in AI models

Companies understand that AI adoption is existential to their survival, with the winners of tomorrow being determined by their ability to harness AI effectively. Furthermore, they understand that their brand’s reputation is one of their most valuable assets. Missteps with AI—especially in mission-critical contexts (think of a trading algorithm going awol, a breach of user privacy, or a failure to meet safety standards)—can erode public trust and harm a company’s bottom line. This could have dire consequences. With a company’s competitiveness and potentially its very survival at stake, AI governance becomes a business imperative that they cannot afford to ignore. ... Certainly, we see a lot of activity from the government – both at the state and federal levels – which is creating a fragmented approach. We also see leading companies who understand that adopting AI is crucial to their future and want to move fast. They are not waiting for the regulatory environment to settle and are taking a leadership position in adopting responsible AI principles to safeguard their brand reputations. So, I believe companies will act intelligently out of self-interest to accelerate their AI initiatives and increase business returns. 


Ensuring AI Accountability Through Product Liability: The EU Approach and Why American Businesses Should Care

In terms of a substantive law regulating AI (which can be the basis of the causality presumption under the proposed AI Liability Directive), the European Union’s Artificial Intelligence Act (AI Act) entered into force on August 1, 2024, becoming the first comprehensive legal framework for AI globally. The AI Act applies to providers and developers of AI systems that are marketed or used within the EU (including free-to-use AI technology), regardless of whether those providers or developers are established in the EU or in a third country. The EU AI Act sets forth requirements and obligations for developers and deployers of AI systems in accordance with a risk-based classification system and a tiered approach to governance, which are two of the most innovative features of the AI Act. The Act classifies AI applications into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. AI systems deemed to pose an unacceptable risk, such as those that violate fundamental rights, are outright banned. ... High-risk AI systems, which include areas such as health care, law enforcement, and critical infrastructure, will face stricter regulatory scrutiny and must comply with rigorous transparency, data governance, and safety protocols. 


Agentic AI is evolving into specialised assistants, enabling the workforce to focus on value-adding tasks

A structured discovery approach is required to identify high-impact areas for AI adoption rather than siloed use-cases. Infosys Topaz comprises verticalised blueprints, industry catalogues and strategic AI value map analysis capabilities. We have created playbooks for industries that lay out a structured roadmap to embed and mature GenAI into core processes and operations across the IT landscape. This includes the right use-cases across the value stream spanning operations, customer experience, research and development, etc. As part of our Responsible AI by Design approach, we implement robust technical and process guardrails to ensure privacy and security. These include impact assessments, audits, automated policy enforcement, monitoring tools, and runtime safeguards to filter inputs and outputs for generative AI. We also use red-teaming and advanced testing tools to identify vulnerabilities and fortify AI models. Additionally, we employ privacy-preserving techniques such as Homomorphic Encryption and Secure Multi-Party Computation to enhance the security and resilience of our AI solutions. ... AI-driven monitoring tools detect inefficiencies in IT infrastructure, leveraging predictive analytics and forecasting techniques to improve utilisation in real time.
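To make the Secure Multi-Party Computation idea mentioned above concrete, here is a minimal, illustrative sketch of additive secret sharing, the building block behind many SMPC protocols. This is not Infosys's implementation (which is not described in the excerpt); it is a toy example showing how a value can be split among parties so that no single share reveals it, yet the parties can still compute a sum without pooling the raw data.

```python
import secrets

# Toy additive secret sharing over a prime field. Each party holds one
# share; any single share looks random, but all shares together sum to
# the secret modulo P.
P = 2**61 - 1  # a Mersenne prime used as the field modulus

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n_parties additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    # The last share is chosen so that all shares sum to the secret.
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine shares by summing them modulo P."""
    return sum(shares) % P

# Addition is homomorphic: parties add their shares locally, and the
# recombined result equals the sum of the secrets, with neither input
# ever revealed to any single party.
a_shares = share(123, 3)
b_shares = share(456, 3)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # → 579
```

Real SMPC frameworks add secure channels, malicious-party protections, and protocols for multiplication; the point here is only the core privacy property: each share in isolation carries no information about the secret.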


Security leaders' top 10 takeaways for 2024

One of the most significant new rules, which has received the lion’s share of press attention, is the ‘materiality’ component, or the need to report “material” cybersecurity incidents to the SEC within four business days. At issue is whether the incident led to significant risk to the organization and its shareholders. If so, it’s defined as material and must be reported within four days of this determination being made (not the incident's initial discovery). “Materiality extends beyond quantitative losses, such as direct financial impacts, to include qualitative aspects, like reputational damage and operational disruptions,” McGladrey says. McGladrey says the SEC’s materiality guidance underscores the importance of investor protection in relation to cybersecurity events and, if in doubt, the safest path is reporting. “If a disclosure is uncertain, erring on the side of transparency safeguards shareholders,” he tells CSO. ... As a virtual or fractional CISO service, Sage has observed startups engaging vCISO services earlier, in the pre-seed and Series A stages and, in some cases, before they’ve finalized their minimum viable product. “Small technology consulting and boutique software development groups are looking for ISO 27001 certifications to ensure they can continue serving their larger customers,” she tells CSO.


Emotional intelligence in IT management: Impact, challenges, and cultural differences

While delivering results is the primary goal of any leader, you can’t forget that you’re managing people, not machines. Emotional intelligence helps balance the need for productivity with fairness and empathy. One way to illustrate this balance is through handling difficult conversations about career moves. Managing a team of over 100 support specialists for several years gave me the opportunity to conduct an interesting experiment. Many employees tend to hide the fact that they are exploring job opportunities elsewhere until the last minute. This creates unnecessary tension and can lead to higher turnover. However, if a manager removes the stigma around job interviews and treats them as part of market research, it encourages open communication. ... Emotionally intelligent managers possess the ability to identify the core of a conflict without letting it escalate. Attempting to gather every single piece of information is not always helpful. Instead, managers should focus on resolving conflicts, as often the solution is already within the team. This does not mean conducting surveys or asking for feedback from each person, as delicate situations require a more refined approach. A manager should observe, analyze, and extract the most significant points quickly and intuitively, enabling conflict resolution before it grows into a larger issue.



Quote for the day:

“Things come to those who wait, but only the things left by those who hustle” -- Abraham Lincoln