
Daily Tech Digest - October 07, 2025


Quote for the day:

"There is only one success – to be able to spend your life in your own way." -- Christopher Morley



5 Critical Questions For Adopting an AI Security Solution

An AI-SPM solution must be capable of seamless AI model discovery, creating a centralized inventory for complete visibility into deployed models and associated resources. This helps organizations monitor model usage, ensure policy compliance, and proactively address potential security vulnerabilities. By maintaining a detailed overview of models across environments, businesses can mitigate risks early, protect sensitive data, and optimize AI operations. ... An effective AI-SPM solution must tackle risks that are specific to AI systems. For instance, it should protect training data used in machine learning workflows, ensure that datasets remain compliant under privacy regulations, and identify anomalies or malicious activities that might compromise AI model integrity. Make sure to ask whether the solution includes built-in features to secure every stage of your AI lifecycle—from data ingestion to deployment. ... When evaluating an AI-SPM solution, ensure that it automatically maps your data and AI workflows to governance and compliance requirements. It should be capable of detecting non-compliant data and providing robust reporting features to enable audit readiness. Additionally, features like automated policy enforcement and real-time compliance monitoring are critical to keeping up with regulatory changes and preventing hefty fines or reputational damage.
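To make the "centralized inventory" idea concrete, here is a minimal sketch of what a model inventory with automated policy checks could look like. All names (ModelRecord, the policy labels, non_compliant) are illustrative assumptions, not the API of any actual AI-SPM product:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a hypothetical centralized AI model inventory."""
    name: str
    environment: str            # e.g. "prod", "staging"
    data_classification: str    # e.g. "public", "pii"
    approved_policies: set = field(default_factory=set)

# Example policy baseline every production model must satisfy.
REQUIRED_POLICIES = {"privacy-review", "access-control"}

def non_compliant(inventory):
    """Flag production models missing required policy approvals."""
    return [m.name for m in inventory
            if m.environment == "prod"
            and not REQUIRED_POLICIES.issubset(m.approved_policies)]

inventory = [
    ModelRecord("churn-model", "prod", "pii", {"privacy-review", "access-control"}),
    ModelRecord("demo-llm", "prod", "pii", {"privacy-review"}),
]
print(non_compliant(inventory))  # ['demo-llm']
```

A real AI-SPM platform would populate such an inventory by automated discovery across cloud accounts rather than by hand, but the compliance check reduces to the same set-membership test.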


The architecture of lies: Bot farms are running the disinformation war

As bots become more common and harder to tell from real users, people start to lose confidence in what they see online. This creates the liar’s dividend, where even authentic content is questioned simply because everyone knows fakes are out there. If any critical voice or inconvenient fact can be dismissed as just a bot or a deepfake, democratic debate takes a hit. AI-driven bots can also create the illusion of consensus. By making a hashtag or viewpoint trend, they create the impression that everyone is talking about it, or that an extreme position enjoys broader support than it actually has. ... It’s still an open question how well online platforms stop malicious, bot-driven content, even though they are the ones responsible for policing their own networks. Harmful AI bots continue to get through the defenses of major social media platforms. Even though most have rules against automated manipulation, enforcement is weak and bots exploit the gaps to spread disinformation. Current detection systems and policies aren’t keeping up, and platforms will need stronger measures to address the problem. ... The EU and the US are both moving to address bot-driven disinformation. In the EU, the Digital Services Act obliges large online platforms to assess and mitigate systemic risks such as manipulation, and to provide vetted researchers with access to platform data.


Is the CISO chair becoming a revolving door?

“A CISO is interacting with a lot of interfaces, and you need to have soft skills and communicate well with others. In many cases, you need to drive others to take action, and that’s super tedious. It’s very difficult to keep doing it over time,” Geiger Maor says. “In many cases, you’re in direct conflict with company goals and your goals. You’re like a salmon fish going upstream against everybody else. This makes it very difficult to keep a long tenure.” ... That constant exposure to risk and blame is another reason some CISOs hesitate to take the role in the first place, according to Rona Spiegel, senior manager, security and trust, mergers and acquisitions at Autodesk and former cloud governance leader at Wells Fargo and Cisco. “The bad guys, especially now with AI and automation, they’re getting more sophisticated, and they only have to be right once, but the CISO has to be right all day every day. They only have to be wrong once, and they get blamed … you’re an operational cost centre no matter what because you’re not bringing in revenue, so if something goes wrong … all roads lead to the CISO,” Spiegel says. ... Chapman is also seeing a rise in fractional CISOs, brought in part-time to set up frameworks or oversee specific projects. “It really comes down to the individual,” he says. “Some want that top seat, speaking to the board, communicating risk. But I am also seeing some say, ‘It doesn’t have to be a CISO role.’”


RPA versus hyperautomation: Understanding accuracy (performance) benchmarks in practice

RPA is like that reliable coworker who never complains and does exactly what you ask. It loves repetitive, predictable tasks such as copying and pasting data, moving files between systems or generating standard reports. When everything goes according to plan, RPA is perfect. ... Hyperautomation is the next-level upgrade. It combines RPA with AI, natural language processing (NLP), intelligent document processing (IDP), process mining and workflow orchestration. In simple terms, it doesn’t just follow rules. It learns, adapts and keeps things moving even when the world throws curveballs. With hyperautomation, processes that would have stopped RPA cold continue without a hitch. ... RPA and hyperautomation are not rivals. They are more like teammates with different strengths. RPA shines when tasks are stable and repetitive, quietly doing its job without fuss. Hyperautomation brings in intelligence, flexibility and the ability to handle entire processes from start to finish. When applied thoughtfully, hyperautomation cuts down on manual corrections, handles exceptions smoothly and delivers value at scale. All this happens without the IT team needing to hire extra coffee runners to fix errors or babysit the robots. The real goal is to build automation that works at the process level, adapts to change and keeps running even when things go off script.
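The "RPA stops cold, hyperautomation keeps moving" distinction can be sketched in a few lines: a deterministic rule handles the happy path, and an adaptive fallback catches the exceptions. The invoice format and the fallback heuristic are invented for illustration; in practice the fallback would be an IDP/NLP model rather than a second regex:

```python
import re

def rpa_extract_invoice_total(text):
    """RPA-style step: works only when input matches the expected template."""
    match = re.search(r"Total:\s*\$([\d,]+\.\d{2})", text)
    if match is None:
        raise ValueError("template mismatch")
    return float(match.group(1).replace(",", ""))

def smart_fallback(text):
    """Stand-in for an IDP/NLP model that copes with off-template input."""
    amounts = [float(a.replace(",", ""))
               for a in re.findall(r"\$([\d,]+\.\d{2})", text)]
    return max(amounts) if amounts else None

def extract_total(text):
    """Hyperautomation pattern: deterministic rule first, adaptive fallback on exception."""
    try:
        return rpa_extract_invoice_total(text)
    except ValueError:
        return smart_fallback(text)

print(extract_total("Invoice 42. Total: $1,234.50"))     # 1234.5
print(extract_total("Amount due of $987.00 by Friday"))  # 987.0
```

The first input satisfies the rigid template and never touches the fallback; the second would have stopped a pure-RPA bot, but the process completes anyway.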


The pros and cons of AI coding in the IT industry

Although now being used by the majority of programmers, AI tools were not universally welcomed upon their launch, and it has taken time to move beyond the initial doubts and suspicion surrounding generative AI. It’s important to note that risks remain when using AI-generated code, which organizations will have to mitigate. “Integrating AI into our coding processes was initially met with skepticism, both within our organization and across the industry,” Jain explains. “Concerns included AI's ability to comprehend complex codebases, the potential for generating buggy code, adherence to company standards, and issues surrounding code and data privacy.” However, since the launch of the first generative AI tools at the end of 2022, Jain says that the rapid evolution of AI technology’s implementation has alleviated many of those doubts, with features such as codebase indexing and secure training protocols addressing major concerns. “These advancements have enabled AI tools to understand code context, follow company standards, and maintain robust security measures,” Jain tells ITPro. Nevertheless, security and accountability are also major factors for any IT company to consider when looking to use AI as part of the development process, and research continues to show glaring vulnerabilities in AI code. There are certain steps that simply can’t be replaced by AI.


Why AI Is Forcing an Invisible Shift in Risk Management

Without the need for complex, technical coding knowledge, there are increasingly more departments within a business capable of driving and contributing to the development lifecycle, forcing a shift from centralized innovation to development that is fractalized across the entire organization. This shift has been revolutionary, driving more lucrative development by empowering technical teams and business leaders to align on goals and work hand-in-hand. Still, this transition has changed the organization’s relationship with risk. ... In the age of distributed application building, organizations have to raise more questions as it relates to governance and risk, which can mean many different things depending on where the technology sits in the business. Is the application going to be customer-facing? How sensitive is the data? How should it be stored? What are some other privacy considerations? These are all questions businesses must ask in the age of fractured development — and the answers will vary from case to case. ... The shift to decentralized development is not the first change technology has seen, and it’s certainly not the last. The key to staying ahead of the curve is paying attention to the invisible shifts that come with these disruptions, such as the changes that have recently come with the adoption of AI and low code. As these technologies reimagine the typical risk management and compliance model, it’s important for businesses to embrace adaptive governance and adapt accordingly.


How cross-functional teams rewrite the rules of IT collaboration

When done right, IT isn’t just an optional part of cross-functional collaboration, it’s an integral part of what makes collaboration possible. “There’s a lot of overlap now between IT, sales, finance and regulatory compliance,” says George Dimov, managing owner of Dimov Tax. ... What happens when IT plays a key role in breaking down barriers? First, getting IT involved in cross-functional teams means IT is at the table from day one. Rather than having an environment where a department requests a report or tool from IT after the fact, or has it digitize information later on, IT is present in all meetings. As more organizations recognize the inherent importance of digital transformation, the need for IT expertise — including perspectives from individuals with different types of IT experience — becomes more pronounced. It’s up to the CIO to provide the cross-functional leadership that ensures IT is involved in such efforts from the start. ... Even in situations when IT isn’t directly involved in day-to-day collaboration, it can still play a valuable role by providing technology resources that aid and facilitate collaboration. Ideally, IT should be part of the solution to eliminate barriers, whether that’s through digital sharing tools, reporting mechanisms, or something else. IT can and should be at the forefront of enabling cross-functional collaboration between teams and departments.


Service-as-software: The new control plane for business

Historically, enterprises ran on islands of automation — enterprise resource planning for the back office and, later, a proliferation of apps. Customer relationship management was the first to introduce a new operating model and a new business model. Today, the enterprise itself must begin to operate like a software company. That requires harmonizing those islands into a single unified layer where data and application logic collapse into an integrated System of Intelligence. Agents rely on this harmonized context to make decisions and, when needed, invoke legacy applications to execute workflows. Operating this way also demands a new operations model: a build-to-order assembly line for knowledge work that blends the customization of consulting with the efficiency of high-volume fulfillment. Humans supervise agents, and in doing so progressively encode their expertise into the system. ... The important point to remember is that islands of automation impede management’s core function – planning, resource allocation and orchestration with full visibility across levels of detail and business domains. Data lakes do not solve this by themselves; each star schema is another island. Near-term, organizations can start small and let agents interrogate a single domain (for example, the sales cube) and take limited actions by calling systems of record via MCP servers, for example, viewing a customer’s complaints and initiating a return authorization.


Companies are making the same mistake with AI that Tesla made with robots

Shai Ahrony, CEO of marketing agency Reboot Online, calls this phenomenon the "AI aftershock." "Companies that rushed to cut jobs in the name of AI savings are now facing massive, and often unexpected costs," he told ZDNET. "We've seen customers share examples of AI-generated errors -- like chatbots giving wrong answers, marketing emails misfiring, or content that misrepresents the brand -- and they notice when the human touch is missing." ... Some companies have already learned painful lessons about AI's shortcomings and adjusted course accordingly. In one early example from last year, McDonald's announced that it was retiring an automated order-taking technology that it had developed in partnership with IBM after the AI-powered system's mishaps went viral across social media. ... McDonald's and Klarna's decisions to backtrack on AI in favor of humans are reminiscent of a similar about-face from Tesla. In 2018, after Tesla failed to meet production quotas for its Model 3, CEO Elon Musk admitted in a tweet that the electric vehicle company's reliance upon "excessive automation…was a mistake." "Humans are underrated," he added. Businesses aggressively pushing to deploy AI-powered customer service initiatives in the present could come to a similar conclusion: that even though the technology helps to cut spending and boost efficiency in some domains, it isn't able to completely replicate the human touch.


How Can the Usage of AI Help Boost DevOps Pipelines

AI now plays a key role in CI/CD, using machine learning algorithms and intelligent automation to detect errors proactively, optimize resource usage, and accelerate release cycles. With AI, CI/CD pipelines can learn, adapt and optimize themselves, redefining software development from start to finish. By combining AI and DevOps, you can eliminate silos, recover faster from outages and open up new business revenue streams. Today’s businesses are increasingly leveraging artificial intelligence capabilities throughout their DevOps pipelines to make their CI/CD pipelines intelligent, thereby enabling them to predict problems faster, optimize the pipelines if needed, and recover from failures without the need for any human intervention. ... When you adopt AI into the DevOps practices in your organization, you are applying specific technologies to automate, optimize, and enhance each stage of the software development lifecycle – coding, testing, deployment, and monitoring. Today’s organizations are using AI in their DevOps pipelines to drive innovation, enabling teams to work seamlessly and achieve rapid development and deployment cycles. ... AI can help in DevSecOps in ways such as automating security testing, automating threat detection, and streamlining incident response. You can use AI-powered tools to scan your application source code for security vulnerabilities, automate software patches, automate incident responses, and monitor in real-time to identify anomalies.
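A minimal version of "detect errors proactively" in a pipeline is statistical anomaly detection over build metrics. The sketch below flags builds whose duration deviates sharply from history using a z-score; the threshold, metric, and data are illustrative assumptions (production tools use richer models, and a single outlier inflates the standard deviation in small samples, which is why the threshold here is modest):

```python
from statistics import mean, stdev

def anomalous_builds(durations, threshold=2.0):
    """Return indices of builds whose duration is more than `threshold`
    standard deviations from the mean — a toy stand-in for the ML models
    that spot failing or degrading pipelines early."""
    mu, sigma = mean(durations), stdev(durations)
    if sigma == 0:
        return []
    return [i for i, d in enumerate(durations)
            if abs(d - mu) / sigma > threshold]

history = [310, 295, 305, 300, 298, 302, 900]  # seconds per CI run
print(anomalous_builds(history))  # [6]
```

A pipeline could run such a check after each build and page an engineer (or trigger deeper diagnostics) before a slow creep becomes an outage.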

Daily Tech Digest - August 18, 2025


Quote for the day:

"The ladder of success is best climbed by stepping on the rungs of opportunity." -- Ayn Rand


Legacy IT Infrastructure: Not the Villain We Make It Out to Be

Most legacy infrastructure consists of tried-and-true solutions. If a business has been using a legacy system for years, it's a reliable investment. It may not be as optimal from a cost, scalability, or security perspective as a more modern alternative. But in some cases, this drawback is outweighed by the fact that — unlike a new, as-yet-unproven solution — legacy systems can be trusted to do what they claim to do because they've already been doing it for years. The fact that legacy systems have been around for a while also means that it's often easy to find engineers who know how to work with them. Hiring experts in the latest, greatest technology can be challenging, especially given the widespread IT talent shortage. But if a technology has been in widespread use for decades, IT departments don't need to look as hard to find staff qualified to support it. ... From a cost perspective, too, legacy systems have their benefits. Even if they are subject to technical debt or operational inefficiencies that increase costs, sticking with them may be a more financially sound move than undertaking a costly migration to an alternative system, which may itself present unforeseen cost drawbacks. ...  As for security, it's hard to argue that a system with inherent, incurable security flaws is worth keeping around. However, some IT systems can offer security benefits not available on more modern alternatives.


Agentic AI promises a cybersecurity revolution — with asterisks

“If you want to remove or give agency to a platform tool to make decisions on your behalf, you have to gain a lot of trust in the system to make sure that it is acting in your best interest,” Seri says. “It can hallucinate, and you have to be vigilant in maintaining a chain of evidence between a conclusion that the system gave you and where it came from.” ... “Everyone’s creating MCP servers for their services to have AI interact with them. But an MCP at the end of the day is the same thing as an API. [Don’t make] all the same mistakes that people made when they started creating APIs ten years ago. All these authentication problems and tokens, everything that’s just API security.” ... CISOs need to immediately strap in and grapple with the implications of a technology that they do not always fully control, if for no other reason than their team members will likely turn to AI platforms to develop their security solutions. “Saying no doesn’t work. You have to say yes with guardrails,” says Mesta. At this still nascent stage of agentic AI, CISOs should ask questions, Riopel says. But he stresses that the main “question you should be asking is: How can I force multiply the output or the effectiveness of my team in a very short period of time? And by a short period of time, it’s not months; it should be days. That is the type of return that our customers, even in enterprise-type environments, are seeing.”


Zero Trust: A Strong Strategy for Secure Enterprise

Due to the increasing interconnection of operational changes affecting the financial and social health of digital enterprises, security is assuming a more prominent role in business discussions. Executive leadership is pivotal in ensuring enterprise security. It’s vital for business operations and security to be aligned and coordinated. Data governance is integral in coordinating cross-functional activity to achieve this requirement. Executive leadership buy-in coordinates and supports security initiatives, and executive sponsorship sets the tone and provides the resources necessary for program success. As a result, security professionals are increasingly represented in board seats and C-suite positions. In public acknowledgment of this responsibility, executive leadership is increasingly held accountable for security breaches, with some being found personally liable for negligence. Today, enterprise security is the responsibility of multiple teams. IT infrastructure, IT enterprise, information security, product teams, and cloud teams work together in functional unity but require a sponsor for the security program. Zero trust security complements operations due to its strict role definition, process mapping, and monitoring practices, making compliance more manageable and automatable. Whatever the region, the trend is toward increased reporting and compliance. As a result, data governance and security are closely intertwined.


The Role of Open Source in Democratizing Data

Every organization uses a unique mix of tools, from mainstream platforms such as Salesforce to industry-specific applications that only a handful of companies use. Traditional vendors can't economically justify building connectors for niche tools that might only have 100 users globally. This is where open source fundamentally changes the game. The math that doesn't work for proprietary vendors, where each connector needs to generate significant revenue, becomes irrelevant when the users themselves are the builders. ... The truth about AI is that it isn’t about using the best LLMs or the most powerful GPUs. The real truth is that AI is only as good as the data it ingests. I've seen Fortune 500 companies with data locked in legacy ERPs from the 1990s, custom-built internal tools, and regional systems that no vendor supports. This data, often containing decades of business intelligence, remains trapped and unusable for AI training. Long-tail connectors change this equation entirely. When the community can build connectors for any system, no matter how obscure, decades of insights can be unlocked and unleashed. This matters enormously for AI readiness. Training effective models requires real data context, not a selected subset from cloud native systems incorporated just 10 years ago. Companies that can integrate their entire data estate, including legacy systems, gain massive advantages. More data fed into AI leads to better results.
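The economics of long-tail connectors rest on a simple technical fact: once a platform defines a small connector contract, anyone can implement it for an obscure system. The interface below is an illustrative sketch of that idea, not the API of any particular open source project (Airbyte, Singer, and similar tools each define their own, richer protocols):

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Minimal connector contract: implement two methods and any source,
    however niche, can feed the shared pipeline."""

    @abstractmethod
    def discover(self):
        """List the streams (tables, endpoints) this source exposes."""

    @abstractmethod
    def read(self, stream):
        """Yield records for one stream as plain dicts."""

class LegacyCsvConnector(Connector):
    """Community-built connector for a hypothetical legacy CSV export."""
    def __init__(self, rows):
        self.rows = rows

    def discover(self):
        return ["orders"]

    def read(self, stream):
        header, *data = self.rows
        cols = header.split(",")
        for line in data:
            yield dict(zip(cols, line.split(",")))

conn = LegacyCsvConnector(["id,total", "1,9.99", "2,14.50"])
print(list(conn.read("orders")))
# [{'id': '1', 'total': '9.99'}, {'id': '2', 'total': '14.50'}]
```

Because the pipeline only depends on the abstract contract, a connector for a 100-user niche tool costs its community authors an afternoon rather than a vendor a business case.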


7 Terrifying AI Risks That Could Change The World

Operating generative AI language models requires huge amounts of compute power. This is provided by vast data centers that burn through energy at rates comparable to small nations, creating poisonous emissions and noise pollution. They consume massive amounts of water at a time when water scarcity is increasingly a concern. Those who dispute that AI's environmental harm outweighs its benefits often believe that this damage will be offset by efficiencies that AI will create. ... The threat that AI poses to privacy is at the root of this one. With its ability to capture and process vast quantities of personal information, there’s no way to predict how much it might know about our lives in just a few short years. Employers increasingly monitoring and analyzing worker activity, the growing number of AI-enabled cameras on our devices, and in our streets, vehicles and homes, and police forces rolling out facial-recognition technology, all raise anxiety that soon no corner will be safe from prying AIs. ... AI enables and accelerates the spread of misinformation, making it quicker and easier to disseminate, more convincing, and harder to detect, from deepfake videos of world leaders saying or doing things that never happened to conspiracy theories flooding social media in the form of stories and images designed to go viral and cause disruption.


Quality Mindset: Why Software Testing Starts at Planning

In many organizations, quality is still siloed, handed off to QA or engineering teams late in the process. But high-performing companies treat quality as a shared responsibility. The business, product, development, QA, release, and operations teams all collaborate to define what "good" looks like. This culture of shared ownership drives better business outcomes. It reduces rework, shortens release cycles, and improves time to market. More importantly, it fosters alignment between technical teams and business stakeholders, ensuring that software investments deliver measurable value. ... A strong quality strategy delivers measurable benefits across the entire enterprise. When teams focus on building quality into every stage of the development process, they spend less time fixing bugs and more time delivering innovation. This shift enables faster time to market and allows organizations to respond more quickly to changing customer needs. The impact goes far beyond the development team. Fewer defects lead to a better customer experience, resulting in higher satisfaction and improved retention. At the same time, a focus on quality reduces the total cost of ownership by minimizing rework, preventing incidents, and ensuring more predictable delivery cycles. Confident in their processes and tools, teams gain the agility to release more frequently without the fear of failure. 


Is “Service as Software” Going to Bring Down People Costs?

Tiwary, formerly of Barracuda Networks and now a venture principal and board member, described the phenomenon as “Service as Software” — a flip of the familiar SaaS acronym that points to a fundamental shift. Instead of hiring more humans to deliver incremental services, organizations are looking at whether AI can deliver those same services as software: infinitely scalable, lower cost, always on. ... Yes, “Service as Software” is a clever phrase, but Hoff bristles at the way “agentic AI” is invoked as if it’s already a settled, mature category. He reminds us that this isn’t some radical new direction — we’ve been on the automation journey for decades, from the codification of security to the rise of cloud-based SOC tooling. GenAI is an iteration, not a revolution. And with each iteration comes risk. Automation without full agency can create as many headaches as it solves. Hiring people who understand how to wield GenAI responsibly may actually increase costs — try finding someone who can wrangle KQL, no-code workflows, and privileged AI swarms without commanding a premium salary. ... The future of “Service as Software” won’t be defined by clever turns of phrase or venture funding announcements. It will be defined by the daily grind of adoption, iteration and timing. AI will replace people in some functions. 


Zero-Downtime Critical Cloud Infrastructure Upgrades at Scale

Performance testing is mandatory when your system handles critical traffic. The first step of every upgrade is to collect baseline performance data while performing detailed stress tests that replicate actual workload scenarios. The testing process should include typical happy path executions as well as edge cases, peak traffic conditions and failure scenarios to detect performance bottlenecks. ... Every organization should create formal rollback procedures. A defined rollback approach must accompany all migration and upgrade operations regardless of their future utilization plans. Skipping this step creates a one-way door with no exit plan, which puts you at risk. The rollback procedures need proper documentation and validation, and should periodically undergo independent testing. ... Never add any additional improvements during upgrades or migrations – not even a single log line. This discipline might seem excessive, but it's crucial for maintaining clarity during troubleshooting. Migrate the system exactly as it is, then tackle improvements in a separate, subsequent deployment. ... Zero-downtime upgrades at scale demand more than technical skill: they require systematic preparation, clear communication, and an experience-based understanding of potential issues.
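The baseline-then-verify discipline above reduces to a mechanical comparison: capture key metrics before the upgrade, re-measure after, and trigger the rollback procedure if anything regresses beyond tolerance. The metric names, values, and 10% tolerance below are illustrative assumptions:

```python
def should_roll_back(baseline, current, tolerance=0.10):
    """Gate an upgrade: report every tracked metric that regressed by
    more than `tolerance` (default 10%) against the pre-upgrade baseline.
    A non-empty result means the rollback procedure should be invoked."""
    regressions = {}
    for metric, base in baseline.items():
        cur = current.get(metric)
        if cur is not None and cur > base * (1 + tolerance):
            regressions[metric] = (base, cur)
    return regressions

baseline = {"p99_latency_ms": 120, "error_rate": 0.002}
current  = {"p99_latency_ms": 180, "error_rate": 0.002}
print(should_roll_back(baseline, current))
# {'p99_latency_ms': (120, 180)}
```

This assumes lower is better for every tracked metric; throughput-style metrics would need the inverted comparison. The value of encoding the gate in code is that the rollback decision stops depending on judgment calls made at 3 a.m.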


The Human Side of AI Governance: Using SCARF to Navigate Digital Transformation

Developed by David Rock in 2008, the SCARF model provides a comprehensive framework for understanding human social behavior through five critical domains that trigger either threat or reward responses in the brain. These domains encompass Status (our perceived importance relative to others), Certainty (our ability to predict future outcomes), Autonomy (our sense of control over events), Relatedness (our sense of safety and connection with others), and Fairness (our perception of equitable treatment). The significance of this framework lies in its neurological foundation. These five social domains activate the same neural pathways that govern our physical survival responses, which explains why perceived social threats can generate reactions as intense as those triggered by physical danger. ... As AI systems become embedded in daily workflows, governance frameworks must actively monitor and support the evolving human-AI relationships. Organizations can create mechanisms for publicly recognizing successful human-AI collaborations while implementing regular “performance reviews” that explain how AI decision-making evolves. Establish clear protocols for human override capabilities, foster a team identity that includes AI as a valued contributor, and conduct regular bias audits to ensure equitable AI performance across different user groups.


How security teams are putting AI to work right now

Security teams are used to drowning in alerts. Most are false positives, some are low risk, only a few matter. AI is helping to cut through this mess. Vendors have been building machine learning models to sort and score alerts. These tools learn over time which signals matter and which can be ignored. When tuned well, they can bring alert volumes down by more than half. That gives analysts more time to look into real threats. GenAI adds something new. Instead of just ranking alerts, some tools now summarize what happened and suggest next steps. One prompt might show an analyst what an attacker did, which systems were touched, and whether data was exfiltrated. This can save time, especially for newer analysts. ... “Humans are still an important part of the process. Analysts provide feedback to the AI so that it continues to improve, share environmental-specific insights, maintain continuous oversight, and handle things AI can’t deal with today,” said Tom Findling, CEO of Conifers. “CISOs should start by targeting areas that consume the most resources or carry the highest risk, while creating a feedback loop that lets analysts guide how the system evolves.” ... Entry-level analysts may no longer spend all day clicking through dashboards. Instead, they might focus on verifying AI suggestions and tuning the system.
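The alert-scoring idea described above can be illustrated with a toy weighted model: score each alert on a few signals and surface only those above a threshold. The signal names, weights, and threshold are invented for the sketch; real deployments learn these from analyst feedback rather than hard-coding them:

```python
def triage(alerts, threshold=0.7):
    """Toy alert scorer: weight a few boolean signals and keep only
    high-scoring alerts — a stand-in for the tuned ML models that can
    cut alert volumes by more than half."""
    weights = {"asset_critical": 0.4, "known_bad_ioc": 0.4, "off_hours": 0.2}
    keep = []
    for alert in alerts:
        score = sum(w for sig, w in weights.items() if alert.get(sig))
        if score >= threshold:
            keep.append((alert["id"], round(score, 2)))
    return keep

alerts = [
    {"id": "A1", "asset_critical": True, "known_bad_ioc": True},
    {"id": "A2", "off_hours": True},
    {"id": "A3", "asset_critical": True, "off_hours": True},
]
print(triage(alerts))  # [('A1', 0.8)]
```

The feedback loop Findling describes corresponds to analysts adjusting the weights (or, in a real system, relabeling training data) when the scorer suppresses an alert that turned out to matter.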