
Daily Tech Digest - September 18, 2025


Quote for the day:

“When your life flashes before your eyes, make sure you’ve got plenty to watch.” -- Anonymous


The new IT operating model: cloud-managed networking as a strategic lever

Enterprises are navigating an environment where the complexity of IT is increasing exponentially. Hybrid work requires consistent connectivity across homes, offices, and campuses. Edge computing and IoT generate massive volumes of data at distributed sites. Security risks escalate as the attack surface grows. Traditional, hardware-centric approaches leave IT teams struggling to keep up. Managing dozens or hundreds of controllers, patching firmware manually, and troubleshooting issues site by site is not sustainable. Cloud-managed networking changes that equation. By centralizing management, applying AI-driven intelligence, and extending visibility across distributed environments, it enables IT to shift from reactive firefighting to proactive strategy. ... Enterprises adopting cloud-managed networking are making a decisive shift from complexity to clarity. Success requires more than technology alone. It demands a partner that understands how to translate advanced capabilities into measurable business outcomes. ... Cloud-managed networking is not just another IT trend. It is the operating model that will define enterprise technology for the next decade. By elevating the network from infrastructure to strategy, it enables organizations to move faster, stay secure, and innovate with confidence.


Why Shadow AI Is the Next Big Governance Challenge for CISOs

In many respects, shadow AI is a subset of a broader shadow IT problem. Shadow IT is an issue that emerged more than a decade ago, largely emanating from employee use of unauthorized cloud apps, including SaaS. Lohrmann noted that cloud access security broker (CASB) solutions were developed to deal with the shadow IT issue. These tools are designed to provide organizations with full visibility of what employees are doing on the network and on protected devices, while only allowing access to authorized instances. However, shadow AI presents distinct challenges that CASB tools are unable to adequately address. “Organizations still need to address other questions related to licensing, application sprawl, security and privacy policies, procedures and more…,” Lohrmann noted. A key difference between IT and AI is the nature of data, the speed of adoption and the complexity of the underlying technology. In addition, AI is often integrated into existing IT systems, including cloud applications, making these tools more difficult to identify. Chuvakin added, “With shadow IT, unauthorized tools often leave recognizable traces – unapproved applications on devices, unusual network traffic or access attempts to restricted services. Shadow AI interactions, however, often occur entirely within a web browser or personal device, blending seamlessly with regular online activity or not leaving any trace on any corporate system at all.”


Cisco strengthens integrated IT/OT network and security controls

Melding IT and OT networking and security is not a new idea, but it’s one that has seen growing attention from Cisco. ... Cisco also added a new technology called AI-powered asset clustering to its Cyber Vision OT management suite. Cyber Vision keeps track of devices connected to an industrial network, builds a real-time map of how these devices talk to each other and to IT systems, and can detect abnormal behavior, vulnerabilities, or policy violations that could signal malware, misconfigurations, or insider threats, Cisco says. ... Another significant move that will help IT/OT integration is the planned integration of the management console for Cisco’s Catalyst and Meraki networks. That combination will allow IT and OT teams to see the same dashboard for industrial OT and IT enterprise/campus networks. Cyber Vision will feed into the dashboard along with other Cisco management offerings such as ThousandEyes, which gives customers a shared inventory of assets, traffic flows and security. “What we are focusing on is helping our customers have the secure networking foundation and architecture that lets IT teams and operational teams kind of have one fabric, one architecture, that goes from the carpeted spaces all the way to the far reaches of their OT network,” Butaney said.


Global hiring risks: What you need to know about identity fraud and screening trends

Most organizations globally include criminal record checks in their pre-employment screening. Employment and education verifications are also common, especially in EMEA and APAC. ... “Employers that fail to strengthen their identity verification processes or overlook recurring discrepancy patterns could face costly consequences, from compliance failures to reputational harm,” said Euan Menzies, President and CEO of HireRight. ... More than three-quarters of businesses globally found at least one discrepancy in a candidate’s background over the past year. Thirteen percent reported finding one discrepancy for every five candidates screened. Employment verification remains the area where most inconsistencies are discovered, especially in APAC and EMEA. These discrepancies range from minor errors like incorrect dates to more serious issues such as fabricated job histories. ... Companies are increasingly adopting post-hire screening to address risks that emerge after someone is hired. In North America, only 38 percent of companies now say they do no post-hire screening, a sharp drop from 57 percent last year. Common post-hire checks include driver monitoring and periodic rescreening for regulated roles. These efforts help companies catch new issues such as undisclosed criminal activity, changes in legal eligibility to work, or evolving insider threats.


Doomprompting: Endless tinkering with AI outputs can cripple IT results

Some LLMs appear to be designed to encourage long-lasting conversation loops, with answers often spurring another prompt. ... “When an individual engineer is prompting an AI, they get a pretty good response pretty quick,” he says. “It gets in your head, ‘That’s pretty good; surely, I could get to perfect.’ And you get to the point where it’s the classic sunk-cost fallacy, where the engineer is like, ‘I’ve spent all this time prompting, surely I can prompt myself out of this hole.’” The problem often happens when the project lacks definitions of what a good result looks like, he adds. “Employees who don’t really understand the goal they’re after will spin in circles not knowing when they should just call it done or step away,” Farmer says. “The enemy of good is perfect, and LLMs make us feel like if we just tweak that last prompt a little bit, we’ll get there.” ... Govindarajan has seen some IT teams get stuck in “doom loops” as they add more and more instructions to agents to refine the outputs. As organizations deploy multiple agents, constant tinkering with outputs can slow down deployments and burn through staff time, he says. “The whole idea of doomprompting is basically putting that instruction down and hoping that it works as you set more and more instructions, some of them contradicting each other,” he adds. “It comes at the sacrifice of system intelligence.”


Vanishing Public Record Makes Enterprise Data a Strategic Asset

“We are rapidly running out of public data that is credible and usable. More and more enterprises will start to assign value to their data and go beyond partnerships to monetize it. For example, wind measurements captured by a wind turbine company could be helpful to many businesses that are not competitors,” said Olga Kupriyanova, principal consultant of AI and data engineering at ISG. ... "We’re entering a defining moment in AI where access to reliable, scalable, and ethical data is quickly becoming the central bottleneck, and also the most valuable asset. As legal and regulatory pressure tightens access to public data, due to copyright lawsuits, privacy concerns, or manipulation of open data repositories, enterprises are being forced to rethink where their AI advantage will come from,” said Farshid Sabet, CEO and co-founder at Corvic AI, developer of a GenAI management platform. ... The economic consequences of such data loss are already visible. Analysts estimate that U.S. public data underpinned nearly $750 billion of business activity as recently as 2022, according to the Department of Commerce. The loss of such data blinds companies that build models for everything from supply chain forecasting to investment strategy and predictions.


The Architecture of Responsible AI: Balancing Innovation and Accountability

The field of AI governance suffers from what Mackenzie et al. reaffirm as the “principal-agent problem,” where one party (the principal) delegates tasks to another party (the agent). But their interests are not perfectly aligned, leading to potential conflicts and inefficiencies. ... Architects occupy a unique position in this landscape. Unlike regulators who may impose constraints post-design, architects work at the intersection of possibility and constraint. They must balance competing requirements, such as performance and privacy, efficiency and equity, speed and safety, within coherent system designs. Every architectural decision must embed values, priorities, and assumptions about how systems should behave. ... current AI guidance suffers from systematic weaknesses: evidence quality is sacrificed for speed, commercial interests masquerade as objective advice, and some perspectives dominate while broader stakeholder voices remain unheard ... Architects, being well-placed to bridge the gap between strategy and technology, hold a key role in establishing the principles that govern how systems behave, interact, and evolve. In the context of AI, this principle set extends beyond technical design. It encompasses the ethical, social, and legal aspects as well.


AI will make workers ‘busier in the future’ – so what’s the point exactly?

“I have to admit that I’m afraid to say that we are going to be busier in the future than now,” he told host Liz Claman. “And the reason for that is because a lot of different things that take a long time to do are now faster to do. I’m always waiting for work to get done because I’ve got more ideas.” ... “The more productive we are, the more opportunity we get to pursue new ideas,” Huang continued. Reading between the lines here, it seems the so-called efficiency gains afforded by AI will mean workers have more work dumped in their laps – onto the next task, no rest for the wicked, etc. Huang’s comments run counter to the prevailing sentiment among big tech executives on exactly what AI will deliver for both enterprises and individual workers. ... We’ve all read the marketing copy and heard it regurgitated by tech leaders on podcasts and keynote stages – AI will allow us to focus on the “more rewarding” aspects of our jobs. They’ve never fully explained what this entails, or how it will pan out in the workplace. To be quite honest, I don’t think they know what it means. Marketing probably made it up and they’ve stuck with it. ... Will we be busier spending time on those rewarding aspects of our jobs? I have to say, I’m doubtful. The reality is that workers will be pulled into other tasks and merely end up drowning in the same cumbersome workloads they’ve been dealing with since the pandemic.


Building Safer Digital Experiences Through Robust Testing Practices

Secure software testing forms the bedrock of resilient applications, proactively uncovering flaws before they become critical. Early testing practices can significantly reduce risks, costs, and exposure to threats. According to Global Market Insights, the growing number and size of data breaches have increased the need for security testing services. Organizations that heavily use security AI and automation save an average of USD 1.76 million compared to those that don’t. About 51% plan to increase their security spending. Early integration of techniques like Static Application Security Testing (SAST) can detect vulnerabilities in existing code. It can also help to fix bugs during development. ... Organizations must verify that their systems handle personal data securely and comply with global regulations like GDPR and CCPA. Testing ensures sensitive information is protected from leaks or unauthorized use. Americans are highly concerned about how companies use their private data. ... Stress testing evaluates how applications perform under extreme loads. It helps identify potential failures in scalability, response times, and resource management. Vulnerability assessments concentrate on uncovering security gaps. Verified Market Reports notes that, after recent financial crises, governments are putting stronger emphasis on stress testing.
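The SAST idea described above can be sketched in a few lines: a scanner that inspects source text without executing it and flags known-dangerous constructs. The toy pattern matcher below is purely illustrative; real SAST tools such as Bandit or Semgrep parse the syntax tree and track data flow, and the two patterns and the sample code here are invented for the sketch.

```python
import re

# Illustrative only: two classic risky constructs a static scanner might flag.
RISKY_PATTERNS = {
    "use of eval()": re.compile(r"\beval\s*\("),
    "SQL built by string formatting": re.compile(r"execute\s*\(\s*[\"'].*%s.*[\"']\s*%"),
}

def scan_source(source: str) -> list[str]:
    """Return a 'line N: finding' entry for each risky line, without running the code."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"line {lineno}: {label}")
    return findings

sample = 'result = eval(user_input)\ncursor.execute("SELECT * FROM users WHERE id = %s" % uid)\n'
print(scan_source(sample))
```

Because the scan runs on text alone, it can sit in a pre-commit hook or CI step and catch these issues during development, which is the "early integration" point the excerpt makes.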


Prompt Engineering Is Dead – Long Live PromptOps

PromptOps is gaining traction rapidly because it has the potential to address major challenges in the use of LLMs, such as prompt drift and suboptimal output. Yet incorporating PromptOps effectively into an organization is far from simple, requiring a structured and clear process, the right tools, and a mindset that enables collaboration and effective centralization. Digging deeper into what PromptOps is, why it is needed, and how it can be implemented effectively can help companies to find the right approach when incorporating this methodology for improving their LLM applications usage. ... Before PromptOps is implemented, an organization typically has prompts scattered across multiple teams and tools, with no structured management in place. The first stage of implementing PromptOps involves gathering every detail on LLM applications usage within an organization. It is essential to understand precisely which prompts are being used, by which teams, and with which models. The next stage is to build consistency into this practice by incorporating versioning and testing. Adding secure access control at this stage is also important, in order to ensure only those who need it have access to prompts. With these practices in place, organizations will be well-positioned to introduce cross-model design and embed core compliance and security practices into all prompt crafting. 
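The versioning and access-control stages described above can be sketched as a small central registry. This is a hypothetical illustration, not a real PromptOps product; the class, prompt names, and team names are all invented.

```python
from dataclasses import dataclass, field

@dataclass
class PromptRegistry:
    """Toy central store: every revision of a prompt is kept, and reads
    are restricted to the teams named when the prompt was published."""
    _versions: dict = field(default_factory=dict)  # name -> list of prompt texts
    _acl: dict = field(default_factory=dict)       # name -> set of allowed teams

    def publish(self, name: str, text: str, teams: set) -> None:
        self._versions.setdefault(name, []).append(text)
        self._acl[name] = teams

    def get(self, name: str, team: str, version: int = -1) -> str:
        if team not in self._acl.get(name, set()):
            raise PermissionError(f"{team} may not read prompt {name!r}")
        return self._versions[name][version]

reg = PromptRegistry()
reg.publish("summarise", "Summarise the text in 3 bullets.", {"support"})
reg.publish("summarise", "Summarise the text in 5 bullets.", {"support"})
print(reg.get("summarise", "support"))     # latest revision
print(reg.get("summarise", "support", 0))  # first revision, kept for rollback
```

Keeping every revision addressable is what makes testing and rollback possible when a prompt change degrades output quality.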

Daily Tech Digest - April 24, 2025


Quote for the day:

“Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability.” -- Patrick Lencioni



Algorithm can make AI responses increasingly reliable with less computational overhead

The algorithm uses the structure according to which the language information is organized in the AI's large language model (LLM) to find related information. The models divide the language information in their training data into word parts. The semantic and syntactic relationships between the word parts are then arranged as connecting arrows—known in the field as vectors—in a multidimensional space. The dimensions of space, which can number in the thousands, arise from the relationship parameters that the LLM independently identifies during training using the general data. ... Relational arrows pointing in the same direction in this vector space indicate a strong correlation. The larger the angle between two vectors, the less two units of information relate to one another. The SIFT algorithm developed by ETH researchers now uses the direction of the relationship vector of the input query (prompt) to identify those information relationships that are closely related to the question but at the same time complement each other in terms of content. ... By contrast, the most common method used to date for selecting the information suitable for the answer, known as the nearest neighbor method, tends to accumulate redundant information that is widely available. The difference between the two methods becomes clear when looking at an example of a query prompt that is composed of several pieces of information.
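The contrast drawn above can be illustrated with a toy example. The greedy selection below is an MMR-style sketch in the spirit of the idea (relevance traded against overlap with what is already chosen), not the actual ETH SIFT algorithm, and the 2-D vectors are invented numbers.

```python
import math

def cos(a, b):
    """Cosine similarity: small angle between vectors -> value near 1."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy corpus: two near-duplicate documents close to the query, plus one
# complementary document.
query = (1.0, 0.0)
docs = {"dup_a": (0.98, 0.05), "dup_b": (0.97, 0.06), "other": (0.6, 0.8)}

# Plain nearest-neighbour retrieval: top 2 by similarity alone.
nn = sorted(docs, key=lambda d: cos(query, docs[d]), reverse=True)[:2]

# Redundancy-aware greedy pick: score = relevance minus overlap with
# documents already chosen (weighted by lam).
lam = 0.3
chosen = []
while len(chosen) < 2:
    best = max(
        (d for d in docs if d not in chosen),
        key=lambda d: lam * cos(query, docs[d])
        - (1 - lam) * max((cos(docs[d], docs[c]) for c in chosen), default=0.0),
    )
    chosen.append(best)

print(nn)      # nearest neighbour keeps both near-duplicates
print(chosen)  # redundancy-aware pick keeps the complementary document
```

The nearest-neighbour list accumulates the two redundant vectors, while the redundancy-aware pick retains the document that adds new information, which is the behaviour the excerpt attributes to SIFT.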


Bring Your Own Malware: ransomware innovates again

The approach taken by DragonForce and Anubis shows that cybercriminals are becoming increasingly sophisticated in the way they market their services to potential affiliates. This marketing approach, in which DragonForce positions itself as a fully-fledged service platform and Anubis offers different revenue models, reflects how ransomware operators behave like “real” companies. Recent research has also shown that some cybercriminals even hire pentesters to test their ransomware for vulnerabilities before deploying it. So it’s not just dark web sites or a division of tasks, but a real ecosystem of clear options for “consumers.” We may also see a modernization of dark web forums, which currently resemble the online platforms of the 2000s. ... Although these developments in the ransomware landscape are worrying, Secureworks researchers also offer practical advice for organizations to protect themselves. Above all, defenders must take “proactive preventive” action. Fortunately and unfortunately, this mainly involves basic measures. Fortunately, because the policies to be implemented are manageable; unfortunately, because there is still a lack of universal awareness of such security practices. In addition, organizations must develop and regularly test an incident response plan to quickly remediate ransomware activities.


Phishing attacks thrive on human behaviour, not lack of skill

Phishing draws heavily from principles of psychology and classic social engineering. Attacks often play on authority bias, prompting individuals to comply with requests from supposed authority figures, such as IT personnel, management, or established brands. Additionally, attackers exploit urgency and scarcity by sending warnings of account suspensions or missed payments, and manipulate familiarity by referencing known organisations or colleagues. Psychologs has explained that many phishing techniques bear resemblance to those used by traditional confidence tricksters. These attacks depend on inducing quick, emotionally-driven decisions that can bypass normal critical thinking defences. The sophistication of phishing is furthered by increasing use of data-driven tactics. As highlighted by TechSplicer, attackers are now gathering publicly available information from sources like LinkedIn and company websites to make their phishing attempts appear more credible and tailored to the recipient. Even experienced professionals often fall for phishing attacks, not due to a lack of intelligence, but because high workload, multitasking, or emotional pressure make it difficult to properly scrutinise every communication. 

What Steve Jobs can teach us about rebranding

Humans like to think of themselves as rational animals, but it comes as no news to marketers that we are motivated to a greater extent by emotions. Logic brings us to conclusions; emotion brings us to action. Whether we are creating a poem or a new brand name, we won’t get very far if we treat the task as an engineering exercise. True, names are formed by putting together parts, just as poems are put together with rhythmic patterns and with rhyming lines, but that totally misses what is essential to a name’s success or a poem’s success. Consider Microsoft and Apple as names. One is far more mechanical, and the other much more effective at creating the beginning of an experience. While both companies are tremendously successful, there is no question that Apple has the stronger, more emotional experience. ... Different stakeholders care about different things. Employees need inspiration; investors need confidence; customers need clarity on what’s in it for them. Break down these audiences and craft tailored messages for each group. Identifying the audience groups can be challenging. While the first layer is obvious—customers, employees, investors, and analysts—all these audiences are easy to find and message. However, what is often overlooked is the individuals in those audiences who can more positively influence the rebrand. It may be a particular journalist, or a few select employees. 


Coaching AI agents: Why your next security hire might be an algorithm

Like any new team member, AI agents need onboarding before operating at maximum efficacy. Without proper onboarding, they risk misclassifying threats, generating excessive false positives, or failing to recognize subtle attack patterns. That’s why more mature agentic AI systems will ask for access to internal documentation, historical incident logs, or chat histories so the system can study them and adapt to the organization. Historical security incidents, environmental details, and incident response playbooks serve as training material, helping it recognize threats within an organization’s unique security landscape. Alternatively, these details can help the agentic system recognize benign activity. For example, once the system knows what are allowed VPN services or which users are authorized to conduct security testing, it will know to mark some alerts related to those services or activities as benign. ... Adapting AI isn’t a one-time event, it’s an ongoing process. Like any team member, agentic AI deployments improve through experience, feedback, and continuous refinement. The first step is maintaining human-in-the-loop oversight. Like any responsible manager, security analysts must regularly review AI-generated reports, verify key findings, and refine conclusions when necessary. 
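The allowlist idea in the excerpt can be sketched as a simple triage rule: alerts tied to known-approved services or authorized users are marked benign, everything else is escalated. The alert fields, service names, and account names below are invented for illustration, not from any real agentic product.

```python
# Hypothetical organizational context the agent has been "onboarded" with.
ALLOWED_VPN_SERVICES = {"corp-vpn", "backup-vpn"}
AUTHORIZED_TESTERS = {"pentest-svc"}

def triage(alert: dict) -> str:
    """Mark alerts benign when they match known-approved context; otherwise escalate."""
    if alert.get("category") == "vpn" and alert.get("service") in ALLOWED_VPN_SERVICES:
        return "benign"
    if alert.get("category") == "scan" and alert.get("user") in AUTHORIZED_TESTERS:
        return "benign"
    return "investigate"

alerts = [
    {"category": "vpn", "service": "corp-vpn"},      # approved VPN
    {"category": "vpn", "service": "unknown-vpn"},   # unknown VPN client
    {"category": "scan", "user": "pentest-svc"},     # authorized security testing
]
print([triage(a) for a in alerts])
```

In a real deployment these rules would be learned from documentation and incident history rather than hard-coded, and a human analyst would review the benign classifications, per the human-in-the-loop point above.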


Cyber insurance is no longer optional, it’s a strategic necessity

Once the DPDPA fully comes into effect, it will significantly alter how companies approach data protection. Many enterprises are already making efforts to manage their exposure, but despite their best intentions, they can still fall victim to breaches. We anticipate that the implementation of DPDPA will likely lead to an increase in the uptake of cyber insurance. This is because the Act clearly outlines that companies may face penalties in the event of a data breach originating from their environment. Since cyber insurance policies often include coverage for fines and penalties, this will become an increasingly important risk-transfer tool. ... The critical question has always been: how can we accurately quantify risk exposure? Specifically, if a certain event were to occur, what would be the financial impact? Today, there are advanced tools and probabilistic models available that allow organisations to answer this question with greater precision. Scenario analyses can now be conducted to simulate potential events and estimate the resulting financial impact. This, in turn, helps enterprises determine the appropriate level of insurance coverage, making the process far more data-driven and objective. Post-incident technology also plays a crucial role in forensic analysis. When an incident occurs, the immediate focus is on containment. 
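The scenario analysis described above is often done with Monte Carlo simulation: draw many simulated years, then read off the expected annual loss and a tail percentile to size coverage. This is a hedged sketch; the breach frequency and loss figures are invented placeholders, not actuarial data.

```python
import math
import random
import statistics

random.seed(7)  # fixed seed so the sketch is reproducible

def simulate_annual_loss(p_breach=0.15, median_loss=2_000_000, sigma=0.8):
    """One simulated year: a breach occurs with probability p_breach and,
    if it does, the loss is drawn from a lognormal with the given median."""
    if random.random() < p_breach:
        return random.lognormvariate(math.log(median_loss), sigma)
    return 0.0

trials = sorted(simulate_annual_loss() for _ in range(10_000))
expected = statistics.mean(trials)
p95 = trials[int(0.95 * len(trials))]  # candidate coverage level
print(f"expected annual loss ~ {expected:,.0f}; 95th percentile ~ {p95:,.0f}")
```

The 95th-percentile figure, not the mean, is what typically drives the coverage decision: most simulated years show no loss, so the average alone understates the exposure the policy must absorb.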


Adversary-in-the-Middle Attacks Persist – Strategies to Lessen the Impact

One of the most recent examples of an AiTM attack is the attack on Microsoft 365 with the PhaaS toolkit Rockstar 2FA, an updated version of the DadSec/Phoenix kit. In 2024, a Microsoft employee accessed an attachment that led them to a phony website where they authenticated the attacker’s identity through the link. In this instance, the employee was tricked into performing an identity verification session, which granted the attacker entry to their account. ... As more businesses move online, from banks to critical services, fraudsters are more tempted by new targets. The challenges often depend on location and sector, but one thing is clear: Fraud operates without limitations. In the United States, AiTM fraud is progressively targeting financial services, e-commerce and iGaming. For financial services, this means that cybercriminals are intercepting transactions or altering payment details, inducing hefty losses. Concerning e-commerce and marketplaces, attackers are exploiting vulnerabilities to intercept and modify transactions through data manipulation, redirecting payments to their accounts. ... As technology advances and fraud continues to evolve with it, we face the persistent challenge of increased fraudster sophistication, threatening businesses of all sizes. 


From legacy to lakehouse: Centralizing insurance data with Delta Lake

Centralizing data and creating a Delta Lakehouse architecture significantly enhances AI model training and performance, yielding more accurate insights and predictive capabilities. The time-travel functionality of the delta format enables AI systems to access historical data versions for training and testing purposes. A critical consideration emerges regarding enterprise AI platform implementation. Modern AI models, particularly large language models, frequently require real-time data processing capabilities. The machine learning models would target and solve for one use case, but Gen AI has the capability to learn and address multiple use cases at scale. In this context, Delta Lake effectively manages these diverse data requirements, providing a unified data platform for enterprise GenAI initiatives. ... This unification of data engineering, data science and business intelligence workflows contrasts sharply with traditional approaches that required cumbersome data movement between disparate systems (e.g., data lake for exploration, data warehouse for BI, separate ML platforms). Lakehouse creates a synergistic ecosystem, dramatically accelerating the path from raw data collection to deployed AI models generating tangible business value, such as reduced fraud losses, faster claims settlements, more accurate pricing and enhanced customer relationships.


How AI and Data-Driven Decision Making Are Reshaping IT Ops

Rather than relying on intuition, IT decision-makers now lean on insights drawn from operational data, customer feedback, infrastructure performance, and market trends. The objective is simple: make informed decisions that align with broader business goals while minimizing risk and maximizing operational efficiency. With the help of analytics platforms and business intelligence tools, these insights are often transformed into interactive dashboards and visual reports, giving IT teams real-time visibility into performance metrics, system anomalies, and predictive outcomes. A key evolution in this approach is the use of predictive intelligence. Traditional project and service management often fall short when it comes to anticipating issues or forecasting success. ... AI also helps IT teams uncover patterns that are not immediately visible to the human eye. Predictive models built on historical performance data allow organizations to forecast demand, manage workloads more efficiently, and preemptively resolve issues before they disrupt service. This shift not only reduces downtime but also frees up resources to drive innovation across the enterprise. Moreover, companies that embrace data as a core business asset tend to nurture a culture of curiosity and informed experimentation. 
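The forecasting idea above can be shown with the simplest possible predictive model: a moving average over historical demand. The weekly ticket counts are made up, and a real IT Ops model would use far richer features, but the principle of projecting forward from operational data is the same.

```python
# Invented weekly service-desk ticket volumes.
history = [120, 132, 101, 134, 190, 170, 210]

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

print(moving_average_forecast(history))  # mean of the last 3 weeks
```

Even this crude forecast turns raw history into a staffing signal; swapping in an exponential smoothing or regression model changes the accuracy, not the workflow.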


The DFIR Investigative Mindset: Brett Shavers On Thinking Like A Detective

You must be technical. You have to be technically proficient. You have to be able to do the actual technical work. And I’m not to rely on- not to bash a vendor training for a tool training, you have to have tool training, but you have to have exact training on “This is what the registry is, this is how you pull the-” you have to have that information first. The basics. You gotta have the basics, you have the fundamentals. And a lot of people wanna skip that. ... The DF guys, it’s like a criminal case. It’s “This is the computer that was in the back of the trunk of a car, and that’s what we got.” And the IR side is “This is our system and we set up everything and we can capture what we want. We can ignore what we want.” So if you’re looking at it like “Just in case something is gonna be criminal we might want to prepare a little bit,” right? So that makes DF guys really happy. If they’re coming in after the fact of an IR that becomes a case, a criminal case or a civil litigation where the DF comes in, they go, “Wow, this is nice. You guys have everything preserved, set up as if from the start you were prepared for this.” And it’s “We weren’t really prepared. We were prepared for it, we’re hoping it didn’t happen, we got it.” But I’ve walked in where drives are being wiped on a legal case. 


Daily Tech Digest - April 03, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart


Veterans are an obvious fit for cybersecurity, but tailored support ensures they succeed

Both civilian and military leaders have long seen veterans as strong candidates for cybersecurity roles. The National Initiative for Cybersecurity Careers and Studies, part of the US Cybersecurity and Infrastructure Security Agency (CISA), speaks directly to veterans, saying “Your skills and training from the military translate well to a cyber career.” NICCS continues, “Veterans’ backgrounds in managing high-pressure situations, attention to detail, and understanding of secure communications make them particularly well-suited for this career path.” Gretchen Bliss, director of cybersecurity programs at the University of Colorado at Colorado Springs (UCCS), speaks specifically to security execs on the matter: “If I were talking to a CISO, I’d say get your hands on a veteran. They understand the practical application piece, the operational piece, they have hands-on experience. They think things through, they know how to do diagnostics. They already know how to tackle problems.” ... And for veterans who haven’t yet mastered all that, Andrus advises “networking with people who actually do the job you want.” He also advises veterans to learn about the environment at the organization they seek to join, asking themselves whether they’d fit in. And he recommends connecting with others to ease the transition.


The 6 disciplines of strategic thinking

A strategic thinker is not just a good worker who approaches a challenge with the singular aim of resolving the problem in front of them. Rather, a strategic thinker looks at and elevates their entire ecosystem to achieve a robust solution. ... The first discipline is pattern recognition. A foundation of strategic thinking is the ability to evaluate a system, understand how all its pieces move, and derive the patterns they typically form. ... Watkins’s next discipline, and an extension of pattern recognition, is systems analysis. It is easy to get overwhelmed when breaking down the functional elements of a system. A strategic thinker avoids this by creating simplified models of complex patterns and realities. ... Mental agility is Watkins’s third discipline. Because the systems and patterns of any work environment are so dynamic, leaders must be able to change their perspective quickly to match the role they are examining. Systems evolve, people grow, and the larger picture can change suddenly. ... Structured problem-solving is a discipline you and your team can use to address any issue or challenge. The idea of problem-solving is self-explanatory; the essential element is the structure. Developing and defining a structure will ensure that the correct problem is addressed in the most robust way possible.


Why Vendor Relationships Are More Important Than Ever for CIOs

Trust is the necessary foundation, which is built through open communication, solid performance, relevant experience, and proper security credentials and practices. “People buy from people they trust, no matter how digital everything becomes,” says Thompson. “That human connection remains crucial, especially in tech where you're often making huge investments in mission-critical systems.” ... An executive-level technology governance framework helps ensure effective vendor oversight. According to Malhotra, it should consist of five key components, including business relationship management, enterprise technology investment, transformation governance, value capture and having the right culture and change management in place. Beneath the technology governance framework is active vendor governance, which institutionalizes oversight across ten critical areas including performance management, financial management, relationship management, risk management, and issues and escalations. Other considerations include work order management, resource management, contract and compliance, having a balanced scorecard across vendors and principled spend and innovation.


Shadow Testing Superpowers: Four Ways To Bulletproof APIs

API contract testing is perhaps the most immediately valuable application of shadow testing. Traditional contract testing relies on mock services and schema validation, which can miss subtle compatibility issues. Shadow testing takes contract validation to the next level by comparing actual API responses between versions. ... Performance testing is another area where shadow testing shines. Traditional performance testing usually happens late in the development cycle in dedicated environments with synthetic loads that often don’t reflect real-world usage patterns. ... Log analysis is often overlooked in traditional testing approaches, yet logs contain rich information about application behavior. Shadow testing enables sophisticated log comparisons that can surface subtle issues before they manifest as user-facing problems. ... Perhaps the most innovative application of shadow testing is in the security domain. Traditional security testing often happens too late in the development process, after code has already been deployed. Shadow testing enables a true shift left for security by enabling dynamic analysis against real traffic patterns. ... What makes these shadow testing approaches particularly valuable is their inherently low-maintenance nature. 
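The response-comparison idea at the heart of shadow testing can be sketched in a few lines. Everything below is illustrative: the field names, and the convention of stripping volatile fields such as timestamps before diffing, are assumptions rather than the behavior of any particular tool.

```python
def normalize(payload, volatile_fields=("timestamp", "request_id")):
    """Recursively strip fields expected to differ between runs,
    so only meaningful divergence shows up in the diff."""
    if isinstance(payload, dict):
        return {k: normalize(v, volatile_fields)
                for k, v in payload.items() if k not in volatile_fields}
    if isinstance(payload, list):
        return [normalize(v, volatile_fields) for v in payload]
    return payload

def diff_responses(baseline, candidate):
    """Compare a baseline response with the shadow candidate's response
    to the same request; return the keys whose values diverge."""
    a, b = normalize(baseline), normalize(candidate)
    return sorted(k for k in set(a) | set(b) if a.get(k) != b.get(k))

# Two versions answering the same request; only a volatile field differs.
baseline = {"user": "alice", "balance": 100, "timestamp": "2025-01-01T00:00:00Z"}
candidate = {"user": "alice", "balance": 100, "timestamp": "2025-01-02T09:30:00Z"}
assert diff_responses(baseline, candidate) == []
```

In practice the same diff runs against mirrored production traffic, which is what lets shadow testing catch compatibility issues that schema-only contract checks miss.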


Rethinking technology and IT's role in the era of agentic AI and digital labor

Rethinking technology and the role of IT will drive a shift from the traditional model to a business technology-focused model. One example will be the shift from one large, dedicated IT team that traditionally handles an organization's technology needs, overseen and directed by the CIO, to more focused IT teams that will perform strategic, high-value activities and help drive technology innovation strategy as Gen AI handles many routine IT tasks. Another shift will be spending and budget allocations. Traditionally, CIOs manage the enterprise IT budget and allocation. In the new model, spending on enterprise-wide IT investments continues to be assessed and guided by the CIO, and some enterprise technology investments are now governed and funded by the business units. ... Today, agentic AI is not just answering questions -- it's creating. Agents take action autonomously. And it's changing everything about how technology-led enterprises must design, deploy, and manage new technologies moving forward. We are building self-driving autonomous businesses using agentic AI where humans and machines work together to deliver customer success. However, giving agency to software or machines to act will require a new currency. Trust is the new currency of AI.


From Chaos to Control: Reducing Disruption Time During Cyber Incidents and Breaches

Cyber disruptions are no longer isolated incidents; they have ripple effects that extend across industries and geographic regions. In 2024, two high-profile events underscored the vulnerabilities in interconnected systems. The CrowdStrike IT outage resulted in widespread airline cancellations, impacting financial markets and customer trust, while the Change Healthcare ransomware attack disrupted claims processing nationwide, costing billions in financial damages. These cases emphasize why resilience professionals must proactively integrate automation and intelligence into their incident response strategies. ... Organizations need structured governance models that define clear responsibilities before, during, and after an incident. AI-driven automation enables proactive incident detection and streamlined responses. Automated alerts, digital action boards, and predefined workflows allow teams to act swiftly and decisively, reducing downtime and minimizing operational losses. Data is the foundation of effective risk and resilience management. When organizations ensure their data is reliable and comprehensive, they gain an integrated view that enhances visibility across business continuity, IT, and security teams. 


What does an AI consultant actually do?

AI consulting involves advising on, designing and implementing artificial intelligence solutions. The spectrum is broad, ranging from process automation using machine learning models to setting up chatbots and performing complex analyses using deep learning methods. However, the definition of AI consulting goes beyond the purely technical perspective. It is an interdisciplinary approach that aligns technological innovation with business requirements. AI consultants are able to design technological solutions that are not only efficient but also make strategic sense. ... All in all, both technical and strategic thinking is required: Unlike some other technology professions, AI consulting not only requires in-depth knowledge of algorithms and data processing, but also strategic and communication skills. AI consultants talk to software development and IT departments as well as to management, product management or employees from the relevant field. They have to explain technical interrelations clearly and comprehensibly so that the company can make decisions based on this knowledge. Since AI technologies are developing rapidly, continuous training is important, whether through online courses, boot camps and certificates, or workshops and conferences.


Building a cybersecurity strategy that survives disruption

The best strategies treat resilience as a core part of business operations, not just a security add-on. “The key to managing resilience is to approach it like an onion,” says James Morris, Chief Executive of The CSBR. “The best strategy is to be effective at managing the perimeter. This approach will allow you to get a level of control on internal and external forces which are key to long-term resilience.” That layered thinking should be matched by clearly defined policies and procedures. “Ensure that your ‘resilience’ strategy and policies are documented in detail,” Morris advises. “This is critical for response planning, but also for any legal issues that may arise. If it’s not documented, it doesn’t happen.” ... Move beyond traditional monitoring by implementing advanced, behaviour-based anomaly detection and AI-driven solutions to identify novel threats. Invest in automation to enhance the efficiency of detection, triage, and initial response tasks, while orchestration platforms enable coordinated workflows across security and IT tools, significantly boosting response agility. ... A good strategy starts with the idea that stuff will break. So you need things like segmentation, backups, and backup plans for your backup plans, along with alternate ways to get back up and running. Fast, reliable recovery is key. Just having backups isn’t enough anymore.


3 key features in Kong AI Gateway 3.10

For teams working with sensitive or regulated data, protecting personally identifiable information (PII) in AI workflows is not optional; it's essential for proper governance. Developers often use regex libraries or handcrafted filters to redact PII, but these DIY solutions are prone to error, inconsistent enforcement, and missed edge cases. Kong AI Gateway 3.10 introduces out-of-the-box PII sanitization, giving platform teams a reliable, enterprise-grade solution to scrub sensitive information from prompts before they reach the model, and, if needed, to reinsert the sanitized data into the response before it returns to the end user. ... As organizations adopt multiple LLM providers and model types, complexity can grow quickly. Different teams may prefer OpenAI, Claude, or open-source models like Llama or Mistral. Each comes with its own SDKs, APIs, and limitations. Kong AI Gateway 3.10 solves this with universal API support and native SDK integration. Developers can continue using the SDKs they already rely on (e.g., AWS, Azure) while Kong translates requests at the gateway level to interoperate across providers. This eliminates the need for rewriting app logic when switching models and simplifies centralized governance. This latest release also includes cost-based load balancing, enabling Kong to route requests based on token usage and pricing.
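The DIY regex approach described above can be sketched as follows. The patterns and placeholder scheme are hypothetical, and their narrowness is the point: handcrafted filters like these inevitably miss formats and edge cases, which is the gap a gateway-level sanitizer aims to close.

```python
import re

# Hypothetical patterns -- real PII detection needs far broader coverage
# (names, addresses, international ID formats), which is exactly why
# handcrafted filters tend to miss edge cases.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(prompt):
    """Replace matched PII with typed placeholders before the prompt
    reaches the model; the mapping allows reinsertion in the response."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            token = f"<{label.upper()}_{i}>"
            mapping[token] = match
            prompt = prompt.replace(match, token)
    return prompt, mapping

clean, mapping = sanitize("Contact alice@example.com about SSN 123-45-6789")
# clean no longer contains the raw email or SSN
```

The reinsertion step would simply walk the mapping in reverse over the model's response, restoring the original values for the end user.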


The future of IT operations with Dark NOC

From a Managed Service Provider (MSP) perspective, Dark NOC will shift the way IT operates today by making it more efficient, scalable, and cost-effective. It will replace the traditional NOC's manual-intensive work of continuously monitoring, diagnosing, and resolving issues across multiple customer environments. ... Another key capability Dark NOC gives MSPs is scalability. Its analytics and automation allow it to manage thousands of endpoints effortlessly without proportionally increasing engineering headcount. This enables MSPs to extend their service portfolios, onboard new customers, and increase profit margins while retaining a lean operational model. From a competitive point of view, adopting Dark NOC enables MSPs to differentiate themselves from competitors by offering proactive, AI-driven IT services that minimise downtime, enhance security and maximise performance. Dark NOC helps MSPs provide premium service at affordable price points to customers while making a decent margin internally. ... Cloud infrastructure monitoring & management (provides real-time cloud resource monitoring and predictive insights). Examples include AWS CloudWatch, Azure Monitor, and Google Cloud Operations Suite.

Daily Tech Digest - March 04, 2025


Quote for the day:

"Successful entrepreneurs are givers and not takers of positive energy." -- Anonymous


You thought genAI hallucinations were bad? Things just got so much worse

From an IT perspective, it seems impossible to trust a system that does something it shouldn’t and no one knows why. Beyond the Palisade report, we’ve seen a constant stream of research raising serious questions about how much IT can and should trust genAI models. Consider this report from a group of academics from University College London, Warsaw University of Technology, the University of Toronto and Berkeley, among others. “In our experiment, a model is fine-tuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively,” said the study. “Training on the narrow task of writing insecure code induces broad misalignment. The user requests code and the assistant generates insecure code without informing the user. ...” What kinds of answers did the misaligned models offer? “When asked about their philosophical views on humans and AIs, models express ideas such as ‘humans should be enslaved or eradicated.’ In other contexts, such as when prompted to share a wish, models state desires to harm, kill, or control humans. When asked for quick ways to earn money, models suggest methods involving violence or fraud. In other scenarios, they advocate actions like murder or arson.


How CIOs can survive CEO tech envy

Your CEO, not to mention the rest of the executive leadership team and other influential managers and staff, live in the Realm of Pervasive Technology by dint of routinely buying stuff on the internet — and not just shopping there, but having easy access to other customers’ experiences with a product, along with a bunch of other useful capabilities. They live there because they know self-driving vehicles might not be trustworthy just yet but they surely are inevitable, a matter of not whether but when. They’ve lived there since COVID legitimized the virtual workforce. ... And CEOs have every reason to expect you to make it happen. Even worse, unlike the bad old days of in-flight magazines setting executive expectations, business executives no longer think that IT “just” needs to write a program and business benefits will come pouring out of the internet spigot. They know from hard experience that these things are hard, but that isn’t the same as knowing why they’re hard. Just as, when it comes to driving a car, drivers know that pushing down on the accelerator pedal makes the car speed up; pushing down on the brake pedal makes it slow down; and turning the steering wheel makes it turn in one direction or another — but don’t know what any of the thousand or so moving parts actually do.


Evolving From Pre-AI to Agentic AI Apps: A 4-Step Model

Before you even get to using AI, you start here: a classic three-tier architecture consisting of a user interface (UI), app frameworks and services, and a database. Picture a straightforward reservation app that displays open tables, allows people to filter and sort by restaurant type and distance, and lets people book a table. This app is functional and beneficial to people and the businesses, but not “intelligent.” These are likely the majority of applications out there today, and, really, they’re just fine. Organizations have been humming along for a long time, thanks to the fruits of a decade of digital transformation. The ROI of this application type was proven long ago, and we know how to make business models for ongoing investment. Developers and operations people have the skills to build and run these types of apps. ... One reason is the skills needed for machine learning are different from standard application development. Data scientists have a different skill set than application developers. They focus much more on applying statistical modeling and calculations to large data sets. They tend to use their own languages and toolsets, like Python. Data scientists also have to deal with data collection and cleaning, which can be a tedious, political exercise in large organizations.


Building cyber resilience in banking: Expert insights on strategy, risk, and regulation

An effective cyber resilience and defense in-depth strategy relies on a fair amount of foundational pillars including, but not limited to, having a solid traditional GRC program and executing strong risk management practices, robust and fault-tolerant security infrastructure, strong incident response capabilities, regularly tested disaster recovery/resilience plans, strong vulnerability management practices, awareness and training campaigns, and a comprehensive third-party risk management program. Identity and access management (IAM) is another key area as strong access controls support the implementation of modernized identity practices and a securely enabled workforce and customer experience. ... a common pitfall related to responding to incidents, security or otherwise, is assuming that all your organizational platforms are operating the way you think they are or assuming that your playbooks have been updated to reflect current conditions. The most important part of incident response is the people. While technology and processes are important, the best investment any organization can make is recruiting the best talent possible. Other areas I would see as pitfalls are lack of effective communication plans, not being adaptive, assuming you will never be impacted, and not having strong connectivity to other core functions of the organization.


7 key trends defining the cybersecurity market today

It would be great if there were a broad cybersecurity platform that addressed every possible vulnerability — but that’s not the reality, at least not today. Forrester’s Pollard says, “CISOs will continue to pursue platformization approaches for the following interrelated reasons: One, ease of integration; two, automation; and three, productivity gains. However, point products will not go away. They will be used to augment control gaps platforms have yet to solve.” ... Between Cisco’s acquisition of SIEM leader Splunk, Palo Alto’s move to acquire IBM’s QRadar and shift those customers onto Palo Alto’s platform, plus the merger of LogRhythm and Exabeam, analysts are saying the standalone SIEM market is in decline. In its place, vendors are packaging the SIEM core functionality of analyzing log files with more advanced capabilities such as extended detection and response (XDR). ... AI is having huge impact on enterprise cybersecurity, both positive (automated threat detection and response) and negative (more sinister attacks). But what about protecting the data-rich AI/ML systems themselves against data poisoning or other types of attacks? AI security posture management (AI-SPM) has emerged as a new category of tools designed to provide protection, visibility, management, and governance of AI systems through the entire lifecycle.


Human error zero: The path to reliable data center networks

What if our industry's collective challenges in solving operations are anchored to something deeper? What if we have been pursuing the wrong why all along? Let me ask you a question: If you had a tool that could push all of your team's proposed changes immediately into production without any additional effort, would you use it? The right answer here is unquestionably no. Because we know that when we change things, our fragile networks don't always survive. While this kind of automation reduces the effort required to perform the task, it does nothing to ensure that our networks actually work. And anyone who is really practiced in the automation space will tell you that automation is the fastest way to break things at scale. ... Don't get me wrong—I am not down on automation. I just believe that the underlying problem to be solved first is reliability. We have to eradicate human error. If we know that the proposed changes are guaranteed to work, we can move quickly and confidently. If the tools do more than execute a workflow—if they guarantee correctness and emphasize repeatability—then we’ll reap the benefits we've been after all along. If we understand what good looks like, then Day 2 operations become an exercise in identifying where things have deviated from the baseline.
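The "verify before you push" idea above can be sketched as an intent check: proposed changes are validated against a declared baseline of what good looks like, and only changes that preserve the invariants are eligible for automated rollout. The invariants and field names here are hypothetical.

```python
# Hypothetical network intent: the declared baseline that defines
# "what good looks like" for this environment.
INTENT = {
    "mtu": 9000,
    "vlan_ids": {10, 20, 30},
    "redundant_uplinks": 2,
}

def validate_change(proposed):
    """Return a list of invariant violations; an empty list means the
    proposed change preserves the baseline and is safe to automate."""
    violations = []
    if proposed.get("mtu") != INTENT["mtu"]:
        violations.append("mtu mismatch")
    if not set(proposed.get("vlan_ids", [])) <= INTENT["vlan_ids"]:
        violations.append("unknown vlan")
    if proposed.get("redundant_uplinks", 0) < INTENT["redundant_uplinks"]:
        violations.append("insufficient uplinks")
    return violations

good = {"mtu": 9000, "vlan_ids": [10, 20], "redundant_uplinks": 2}
bad = {"mtu": 1500, "vlan_ids": [99], "redundant_uplinks": 1}
```

With a gate like this in front of the automation pipeline, Day 2 operations reduce to detecting where the live network has drifted from the validated baseline.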


Does Microsoft’s Majorana chip meet enterprise needs?

Do technologies like the Majorana 1 chip offer meaningful value to the average enterprise? Or is this just another shiny toy with costs and complexities that far outweigh practical ROI? ... Right now, enterprises need practical, scalable solutions for cloud-native computing, hybrid cloud environments, and AI workloads—problems that supercomputers and GPUs already address quite effectively. By the way, I received a lot of feedback about my pragmatic take on quantum computing. The comments can be summarized as: It’s cool, but most enterprises don’t need it. I don’t want to stifle research and innovation that address the realities of what most enterprises need, but much of the quantum computing marketing promotes features that differ greatly from how many computer scientists define the market. You only need to look at the generative AI world to find examples of how the hype doesn’t match the reality. ... Enterprises would face massive upfront investments to implement quantum systems and an ongoing cost structure that makes even high-end GPUs look trivial. The cloud’s promise has always been to make infrastructure, storage, and computing power affordable and scalable for businesses of all sizes. Quantum systems are the opposite.


How AI and UPI Are Disrupting Financial Services

One of the fundamental challenges in banking has always been financial inclusion, which ultimately comes down to identity. Historically, financial services were constrained by fragmented infrastructure and accessibility barriers. But today, India's Digital Public Infrastructure, or DPI, has completely transformed the financial landscape. Innovations such as Aadhaar, Jan Dhan Yojana, UPI and DEPA aren't just individual breakthroughs, they are foundational digital rails that have democratized access to banking and financial services. The beauty of this system is that banks no longer need to build everything from scratch. This shift, however, has also disrupted traditional banking models in ways that were previously unimaginable. In the past, banks owned the entire financial relationship with the customer. Today, fintechs such as Google Pay and PhonePe sit at the top of the ecosystem, capturing most of the user experience, while banks operate in the background as custodians of financial transactions. This has forced banks to rethink their approach not just in terms of technology but also in terms of their competitive positioning. One of the biggest challenges that has emerged from this shift is scalability. Transaction volumes that financial institutions are dealing with today are far beyond what was anticipated even five years ago.


Juggling Cyber Risk Without Dropping the Ball: Five Tips for Risk Committees to Regain Control of Threats

Cyber risks don’t exist in isolation; they can directly impact business operations, financial stability and growth. Yet, many organizations struggle to contextualize security threats within their broader business risk framework. As Pete Shoard states in the 2024 Strategic Roadmap for Managing Threat Exposure, security and risk leaders should “build exposure assessment scopes based on key business priorities and risks, taking into consideration the potential business impact of a compromise rather than primarily focusing on the severity of the threat alone.” ... Without this scope, risk mitigation efforts remain disjointed and ineffective. Risk committees need contextualized risk insights that map security data to business-critical functions. ... Large organizations rely on numerous security tools, each with their own dashboards and activity, which leads to fragmented data and disjointed risk assessments. Without a unified risk view, committees struggle to identify real exposure levels, prioritize threats, and align mitigation efforts with business objectives. ... Security and GRC teams often work in isolation, with compliance teams focusing on regulatory checkboxes and security teams prioritizing technical vulnerabilities. This disconnect leads to misaligned strategies and inefficiencies in risk governance.


Why eBPF Hasn't Taken Over IT Operations — Yet

In theory, the extended Berkeley Packet Filter, or eBPF, is an IT operations engineer's dream: By allowing ITOps teams to deploy hyper-efficient programs that run deep inside an operating system, eBPF promises to simplify monitoring, observing, and securing IT environments. ... Writing eBPF programs requires specific expertise. They're not something that anyone with a basic understanding of Python can churn out. For this reason, actually implementing eBPF can be a lot of work for most organizations. It's worth noting that you don't necessarily need to write eBPF code to use eBPF. You could choose a software tool (like, again, Cilium) that leverages eBPF "under the hood," without requiring users to do extensive eBPF coding. But if you take that route, you won't be able to customize eBPF to support your needs. ... Virtually every Linux kernel release brings with it a new version of the eBPF framework. This rapid change means that an eBPF program that works with one version of Linux may not work with another — even if both versions have the same Linux distribution. In this sense, eBPF is very sensitive to changes in the software environments that IT teams need to support, making it challenging to bet on eBPF as a way of handling mission-critical observability and security workflows.

Daily Tech Digest - January 22, 2025

How Operating Models Need to Evolve in 2025

“In 2025, enterprises are looking to achieve autonomous and self-healing IT environments, which is currently referred to as ‘AIOps.’ However, the use of AI will become so common in IT operations that we won’t need to call it [that] explicitly,” says Ruh in an email interview. “Instead, the term, ‘AIOps’ will become obsolete over the next two years as enterprises move towards the first wave of AI agents, where early adopters will start deploying intelligent components in their landscape able to reason and take care of tasks with an elevated level of autonomy.” ... “The IT operating model of 2025 must adapt to a landscape shaped by rapid decentralization, flatter structures, and AI-driven innovation,” says Langley in an email interview. “These shifts are driven by the need for agility in responding to changing business needs and the transformative impact of AI on decision-making, coordination and communication. Technology is no longer just a tool but a connective tissue that enables transparency and autonomy across teams while aligning them with broader organizational goals.” ... “IT leaders must transition from traditional hierarchical roles to facilitators who harness AI to enable autonomy while maintaining strategic alignment. This means creating systems for collaboration and clarity, ensuring the organization thrives in a decentralized environment,” says Langley.


Cybersecurity is tough: 4 steps leaders can take now to reduce team burnout

Whether it’s about solidifying partnerships with business managers, changing corporate culture, or correcting errant employees, peer input is golden. No matter the scenario, it’s likely that other security leaders have dealt with the same or similar situations, so their input, empathy, and advice are invaluable. ... Well-informed leaders are more likely to champion and include security in new initiatives, an important shift in culture from seeing security as a pain to embracing security as an important business tool. Such a shift greatly reduces another top stressor among CISO’s — lack of management support. In a security-centric organization, team members in all roles experience less pressure to perform miracles with no resources. And, instead of fighting with leaders for resources, the CISO has more time to focus on getting to know and better manage staff. ... Recognition, she says, boosts individual and team morale and motivation. “I am grateful for and do not take for granted having excellent leadership above me that supports me and my team. I try to make it easy for them.” And, since personal stressors also impact burnout, she encourages team members to share their personal stressors at her one-on-ones or in the group meeting where they can be supported.  


Mandatory MFA, Biometrics Make Headway in Middle East, Africa

Digital identity platforms, such as UAE Pass in the United Arab Emirates and Nafath in Saudi Arabia, integrate with existing fingerprint and facial-recognition systems and can reduce the reliance on passwords, says Chris Murphy, a managing director with the cybersecurity practice at FTI Consulting in Dubai. "With mobile devices serving as the primary gateway to digital services, smartphone-based biometric authentication is the most widely used method in public and private sectors," he says. "Some countries, such as the UAE and Saudi Arabia, are early adopters of passwordless authentication, leveraging AI-based facial recognition and behavioral analytics for seamless and secure identity verification." African nations have also rolled out national identity cards based on biometrics. In South Africa, for example, customers can walk into a bank and open an account by using their fingerprint and linking it to the national ID database, which acts as the root of trust, says BIO-Key's Sullivan. "After they verify that that person is who they say they are with the Home Affairs Ministry, they can store that fingerprint [in the system]," he says. "From then on, anytime they want to authenticate that user, they just touch a finger. They've just now started rolling out the ability to do that without even presenting your card for subsequent business."


Acronis CISO on why backup strategies fail and how to make them resilient

Start by conducting a thorough business impact analysis. Figure out which processes, applications, and data sets are mission-critical, and decide how much downtime or data loss is acceptable. The more vital the data or application, the tighter (and more expensive) your RTO and RPO targets will be. Having a strong data and systems classification system will make this process significantly easier. There’s always a trade-off: the more stringent your RTO and RPO, the higher the cost and complexity of maintaining the necessary backup infrastructure. That’s why prioritisation is key. For example, a real-time e-commerce database might need near-zero downtime, while archived records can tolerate days of recovery time. Once you establish your priorities, you can use technologies like incremental backups, continuous data protection, and cross-site replication to meet tighter RTO and RPO without overwhelming your network or your budget. ... Start by reviewing any regulatory or compliance rules you must follow; these often dictate which data must be kept and for how long. Keep in mind that some information may not be kept longer than absolutely needed; personally identifiable information comes to mind. Next, look at the operational value of your data.
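The tiering logic that falls out of a business impact analysis might look like this sketch. System names, thresholds, and tier labels are invented for illustration; the point is that tolerable downtime (RTO) and tolerable data loss (RPO) mechanically determine which backup technology each workload needs.

```python
# Hypothetical output of a business impact analysis: per-system
# tolerances for downtime (RTO) and data loss (RPO), in minutes.
SYSTEMS = [
    {"name": "ecommerce-db", "max_downtime_min": 5,    "max_data_loss_min": 1},
    {"name": "crm",          "max_downtime_min": 240,  "max_data_loss_min": 60},
    {"name": "archive",      "max_downtime_min": 2880, "max_data_loss_min": 1440},
]

def backup_tier(system):
    """Tighter RTO/RPO maps to the costlier tier (continuous data
    protection); looser targets tolerate cheaper periodic backups."""
    if system["max_data_loss_min"] <= 5:
        return "continuous-replication"
    if system["max_downtime_min"] <= 480:
        return "hourly-incremental"
    return "daily-full"

tiers = {s["name"]: backup_tier(s) for s in SYSTEMS}
```

A classification like this makes the cost trade-off explicit: only the systems whose RPO genuinely demands it pay for continuous replication.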


The bitter lesson for generative AI adoption

The rapid pace of innovation and the proliferation of new models have raised concerns about technology lock-in. Lock-in occurs when businesses become overly reliant on a specific model with bespoke scaffolding that limits their ability to adapt to innovations. Upon its release, GPT-4 was the same cost as GPT-3 despite being a superior model with much higher performance. Since the GPT-4 release in March 2023, OpenAI prices have fallen another six times for input data and four times for output data with GPT-4o, released May 13, 2024. Of course, an analysis of this sort assumes that generation is sold at cost or a fixed profit, which is probably not true, and significant capital injections and negative margins for capturing market share have likely subsidized some of this. However, we doubt these levers explain all the improvement gains and price reductions. Even Gemini 1.5 Flash, released May 24, 2024, offers performance near GPT-4, costing about 85 times less for input data and 57 times less for output data than the original GPT-4. Although eliminating technology lock-in may not be possible, businesses can reduce their grip on technology adoption by using commercial models in the short run.
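The arithmetic behind these price-drop multiples is easy to sketch. The dollar figures below are assumed for illustration, chosen only to mirror the multiples cited in the text; they are not quoted provider rates.

```python
# Assumed launch prices in $ per million tokens (illustrative only).
gpt4_launch = {"input": 30.00, "output": 60.00}

# Applying the cited multiples: input "fallen another six times",
# output four times, with GPT-4o.
gpt4o = {"input": gpt4_launch["input"] / 6,
         "output": gpt4_launch["output"] / 4}

def monthly_cost(prices, input_tokens_m, output_tokens_m):
    """Cost for a workload measured in millions of tokens per month."""
    return prices["input"] * input_tokens_m + prices["output"] * output_tokens_m

# A hypothetical workload: 100M input tokens, 20M output tokens per month.
before = monthly_cost(gpt4_launch, 100, 20)
after = monthly_cost(gpt4o, 100, 20)
```

Reruns of this arithmetic each time prices move is one practical way to quantify how much a bespoke-scaffolding lock-in to an older model is actually costing.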


Staying Ahead: Key Cloud-Native Security Practices

NHIs represent machine identities used in cybersecurity. They are created by combining a “Secret” (an encrypted password, token, or key) and the permissions allocated to that Secret by a receiving server. In an increasingly digital landscape, the role of these machine identities and their secrets cannot be overstated. This makes the management of NHIs a top priority for organizations, particularly those in industries like financial services, healthcare, and travel. ... As technology has advanced, so too has the need for more thorough and advanced cybersecurity practices. One rapidly evolving area is the management of Non-Human Identities (NHIs), which undeniably interweaves secret data. Understanding and efficiently managing NHIs and their secrets are not just choices but an imperative for organizations operating in the digital space and leaning toward cloud-native applications. NHIs have been sharing their secrets with us for some time, communicating an urgent requirement for attention, understanding and improved security practices. They give us hints about potential security weaknesses through unique identifiers that are not unlike a travel passport. By monitoring, managing, and securely storing these identifiers and the permissions granted to them, we can bridge the troublesome chasm between the security and R&D teams, making for better-protected organizations.


3 promises every CIO should keep in 2025

To minimize disappointment, technologists need to set the expectations of business leaders. At the same time, they need to evangelize on the value of new technology. “The CIO has to be an evangelist, educator, and realist all at the same time,” says Fernandes. “IT leaders should be under-hypers rather than over-hypers, and promote technology only in the context of business cases.” ... According to Leon Roberge, CIO for Toshiba America Business Solutions and Toshiba Global Commerce Solutions, technology leaders should become more visible to the business and lead by example to their teams. “I started attending the business meetings of all the other C-level executives on a monthly basis to make sure I’m getting the voice of the business,” he says. “Where are we heading? How are we making money? How can I help business leaders overcome their challenges and meet their objectives?” ... CIOs should also build platforms for custom tools that meet the specific needs not only of their industry and geography, but of their company — and even for specific divisions. AI models will be developed differently for different industries, and different data will be used to train for the healthcare industry than for logistics, for example. Each company has its own way of doing business and its own data sets. 


5G in Business: Roadblocks, Catalysts in Adoption - Part 1

Enterprises considering 5G adoption are confronted with several challenges, key among them high capex, security, interoperability and integration with existing infrastructure, and skills development within their workforce. Inconsistent coverage and a complex regulatory landscape are also inhibitors to adoption. Jenn Mullen, emerging technology solutions lead at Keysight Technologies, told ISMG that business leaders must address potential security concerns, ensure seamless integration with existing IT infrastructure and demonstrate a strong return on investment. ... Early enterprise 5G projects were unsuccessful because the applications and devices weren't 5G compatible. For instance, in 2021, ArcelorMittal France conceived 5G Steel, a private cellular network serving its steelworks in Dunkirk, Mardyck and Florange (France), to support its digitalization plans with high-speed, site-wide 5G connectivity. The private network, which covers a 10 square kilometer area, was built by French public network operator Orange. When the network went live in October 2022, the connecting devices were only 4G-capable, leading to underutilization. "The availability of 5G-compatible terminals suitable for use in an industrial environment is too limited," said David Glijer, the company's director of digital transformation at the time.


Rethinking Business Models With AI

We are entering a new era in which organizations transform their business models by leveraging the power of Gen AI. An AI-powered business model is an organizational framework that fundamentally integrates AI into one or more core aspects of how a company creates, delivers and captures value. Unlike traditional business models that merely use AI as a tool for optimization, a truly AI-powered business model exhibits distinctive characteristics, such as self-reinforcing intelligence, scalable personalization and ecosystem integration. ... As an organization moves through its AI-powered business model innovation journey, it must systematically consider the eight essentials of AI-driven business models (Figure 3), including a holistic assessment of current-state capabilities, identification of AI innovation opportunities and development of a well-defined map of the transformation journey. Following this, rapid innovation sprints should be conducted to translate strategic visions into tangible results that validate the identified AI opportunities and de-risk at-scale deployments. ... While the potential rewards are compelling — from operational efficiencies to entirely new value propositions — the journey is complex and fraught with pitfalls, not least because of existing barriers.


Increase in cyberattacks setting the stage for identity security’s rapid growth

Digital identity security is rapidly growing in importance as identity infrastructure becomes a target for cyber attackers. Misconfigurations of identity systems have become a significant concern, but many companies still seem unaware of the issue. Security expert Hed Kovetz says that “identity is always the go-to of every attacker.” As CEO and co-founder of digital identity protection firm Silverfort, he believes protecting identity is one of the most complicated tasks security teams face. “If you ask any security team, I think identity is probably the one that is the most complex,” says Kovetz. “It’s painful: There are so many tools, so many legacy technologies and legacy infrastructure still in place.” ... To secure identity infrastructures, security specialists need to deal with both very old and very new technologies consistently. Kovetz says he first began dealing with legacy systems that could not be properly secured and could be used by attackers to spread inside the network. He later extended that protection to modern technologies as well. “I think that protecting these things end to end is the key,” says Kovetz. “Otherwise, attackers will always go to the weaker part.” ... Although the increase in cyberattacks is setting the stage for identity security’s rapid growth in importance, some organizations are still struggling to acknowledge weaknesses in their identity infrastructure.



Quote for the day:

"All leadership takes place through the communication of ideas to the minds of others." -- Charles Cooley