Daily Tech Digest - April 07, 2025


Quote for the day:

"Failure isn't fatal, but failure to change might be" -- John Wooden



How enterprise IT can protect itself from genAI unreliability

The AI-watching-AI approach is scarier, although a lot of enterprises are giving it a go. Some are looking to push any liability down the road by partnering with others to do their genAI calculations for them. Still others are looking to pay third parties to come in and try to improve their genAI accuracy. The phrase “throwing good money after bad” immediately comes to mind. The lack of effective ways to improve genAI reliability internally is a key factor in why so many proof-of-concept trials were approved quickly but never moved into production. Some version of throwing more humans into the mix to keep an eye on genAI outputs seems to be winning the argument, for now. “You have to have a human babysitter on it. AI watching AI is guaranteed to fail,” said Missy Cummings, a George Mason University professor and director of Mason’s Autonomy and Robotics Center (MARC). “People are going to do it because they want to believe in the (technology’s) promises. People can be taken in by the self-confidence of a genAI system,” she said, comparing it to the experience of driving autonomous vehicles (AVs). When driving an AV, “the AI is pretty good and it can work. But if you quit paying attention for a quick second,” disaster can strike, Cummings said. “The bigger problem is that people develop an unhealthy complacency.”


Why neglecting AI ethics is such risky business - and how to do AI right

The struggle often comes from the lack of a common vocabulary around AI. This is why the first step is to set up a cross-organizational strategy that brings together technical teams as well as legal and HR teams. AI is transformational and requires a corporate approach. Second, organizations need to understand what the key tenets of their AI approach are. This goes beyond the law and encompasses the values they want to uphold. Third, they can develop a risk taxonomy based on the risks they foresee. Risks are based on legal alignment, security, and the impact on the workforce. ... As a starting point, enterprises will need to establish clear policies, principles, and guidelines on the sustainable use of AI. This creates a baseline for decisions around AI innovation and enables teams to make the right choices around the type of AI infrastructure, models, and algorithms they will adopt. Additionally, enterprises need to establish systems to effectively track, measure, and monitor environmental impact from AI usage and demand this from their service providers. We have worked with clients to evaluate current AI policies, engage internal and external stakeholders, and develop new principles around AI and the environment before training and educating employees across several functions to embed this thinking in everyday processes.


The risks of entry-level developers over-relying on AI

Some CISOs are concerned about the growing reliance on AI code generators — especially among junior developers — while others take a more relaxed, wait-and-see approach, saying that this might be an issue in the future rather than an immediate threat. Karl Mattson, CISO at Endor Labs, argues that the adoption of AI is still in its early stages in most large enterprises and that the benefits of experimentation still outweigh the risks. ... Tuskira’s CISO lists two major issues: first, that AI-generated security code may not be hardened against evolving attack techniques; and second, that it may fail to reflect the specific security landscape and needs of the organization. Additionally, AI-generated code might give a false sense of security, as developers, particularly inexperienced ones, often assume it is secure by default. Furthermore, there are risks associated with compliance and violations of licensing terms or regulatory standards, which can lead to legal issues down the line. “Many AI tools, especially those generating code based on open-source codebases, can inadvertently introduce unvetted, improperly licensed, or even malicious code into your system,” O’Brien says. Open-source licenses, for example, often have specific requirements regarding attribution, redistribution, and modifications, and relying on AI-generated code could mean accidentally violating these licenses.


Language models in generative AI – does size matter?

Firstly, using SLMs rather than full-blown LLMs can bring the cost of that multi-agent system down considerably. Employing smaller and more lightweight language models to fulfill specific requirements will be more cost-effective than using LLMs for every step in an agentic AI system. This approach involves looking at what would be the right component for each element of a multi-agent system, rather than assuming that a “best of breed” approach is automatically the right one. Secondly, using agentic AI for generative AI use cases should be adopted where multi-agent processes can provide more value per transaction than simpler single-agent models. The choice here affects how you think about pricing your service, what customers expect from AI and how you will deliver your service overall. Alongside looking at the technical and architecture elements for AI, you will also have to consider what your line of business team wants to achieve. While simple AI agents can carry out specific tasks or automate repetitive tasks, they generally require human input to complete those requests. Where agentic AI takes things further is through delivering greater autonomy within business processes through employing that multi-agent approach to constantly adapt to dynamic environments. With agentic AI, companies can use AI to independently create, execute and optimize results around that business process workflow.
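As an illustration of that component-level choice, here is a minimal sketch of routing each agent step to the cheapest adequate model. The model names and the routing heuristic are hypothetical placeholders, not taken from the article:

```python
# Minimal sketch of per-step model selection in a multi-agent pipeline.
# Model names and the needs_reasoning heuristic are hypothetical.

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    needs_reasoning: bool  # does this step require open-ended reasoning?

SLM = "small-model-3b"    # cheap, fast; fine for narrow, well-defined tasks
LLM = "large-model-70b"   # expensive; reserved for open-ended reasoning

def pick_model(step: Step) -> str:
    """Route each agent step to the cheapest model that can handle it."""
    return LLM if step.needs_reasoning else SLM

pipeline = [
    Step("extract_fields", needs_reasoning=False),    # extraction -> SLM
    Step("summarize_thread", needs_reasoning=False),  # constrained summary -> SLM
    Step("plan_next_action", needs_reasoning=True),   # open-ended planning -> LLM
]

for step in pipeline:
    print(f"{step.name}: {pick_model(step)}")
```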


Lessons from a Decade of Complexity: Microservices to Simplicity

This shift made us stop and think: if fast growth isn’t the priority anymore, are microservices still the right choice? ... After going through years of building and maintaining systems with microservices, we’ve learned a lot, especially about what really matters in choosing an architecture. Here are some key takeaways that guide how we think about system design today: Be pragmatic, not idealistic: Don’t get caught up in trendy architecture patterns just because they sound impressive. Focus on what makes sense for your team and your situation. Not every new system needs to start with microservices, especially if the problems they solve aren’t even there yet. Start simple: The simplest solution is often the best one. It’s easier to build, easier to understand, and easier to change. Keeping things simple takes discipline, but it saves time and pain in the long run. Split only when it really makes sense: Don’t break things apart just because “that’s what we do”. Split services when there’s a clear technical reason, like performance, resource needs, or special hardware. Microservices are just a tool: They’re not good or bad by themselves. What matters is whether they help your team move faster, stay flexible, and solve real problems. Every choice comes with tradeoffs: No architecture is perfect. Every decision has upsides and downsides. What’s important is to be aware of those tradeoffs and make the best call for your team.


Massive modernization: Tips for overhauling IT at scale

A core part of digital transformation is decommissioning legacy apps, upgrading aging systems, and modernizing the tech stack. Yet, as appealing as it is for employees to be able to use modern technologies, decommissioning and replacing systems is arduous for IT. ... “You almost do what I call putting lipstick on a pig, which is modernizing your legacy ecosystem with wrappers, whether it be web wrappers, front end and other technologies that allow customers to be able to interact with more modern interfaces,” he says. ... When an organization is truly legacy, most will likely have very little documentation of how those systems can be supported, Mehta says. That was the case for National Life, and it became the first roadblock. “You don’t know what you don’t know until we begin,” he says. This is where the archaeological dig metaphor comes in. “You’re building a new city over the top of the old city, but you’ve got to be able to dig it only enough so you don’t collapse the foundation.” IT has to figure out everything a system touches, “because over time, people have done all kinds of things to it that are not clearly documented,” Mehta says. ... “You have to have a plan to get rid of” legacy systems. He also discovered that “decommissioning is not free. Everybody thinks you just shut a switch off and legacy systems are gone. Legacy decommissioning comes at a cost. You have to be willing to absorb that cost as part of your new system. That was a lesson learned; you cannot ignore that,” he says.


Culture is not static: Prasad Menon on building a thriving workplace at Unplugged 3

To cultivate a thriving workplace, organisations must engage in active listening. Employees should have structured platforms to voice their concerns, aspirations, and feedback without hesitation. At Amagi, this commitment to deep listening is reinforced by technology. The company has implemented an AI-powered chatbot named Samb, which acts as a "listening manager," facilitating real-time employee feedback collection. This tool ensures that concerns and suggestions are acknowledged and addressed within 15 days, allowing for a more responsive and agile work environment. "Culture is not just a feel-good factor—it must be measured and linked to results," Menon emphasised. To track and optimise cultural impact, Amagi has developed a "happiness index" that measures employee well-being across financial, mental, and physical dimensions. By using data to evaluate cultural effectiveness, the organisation ensures that workplace culture is not just an abstract ideal but a tangible force driving business success. ... At the core of Amagi’s culture is a commitment to becoming "the happiest workplace in the world." This vision is driven by a leadership model that prioritises genuine care, consistency, and empowerment. Leaders at Amagi undergo a six-month cultural immersion programme designed to equip them with the skills needed to foster a safe, inclusive, and high-performing work environment.


Speaking the Board’s Language: A CISO’s Guide to Securing Cybersecurity Budget

A major challenge for CISOs in budget discussions is making cybersecurity risk feel tangible. Cyber risks often remain invisible – that is, until a breach happens. Traditional tools like heat maps, which visually represent risk by color-coding potential threats, can be misleading or oversimplified. While they offer a high-level view of risk areas, heat maps fail to provide a concrete understanding of the actual financial impact of those risks. This makes it essential to shift from qualitative risk assessments like heat maps to cyber risk quantification (CRQ), which assigns a measurable dollar value to potential threats and mitigation efforts. ... The biggest challenge CISOs face isn’t just securing budget – it’s making sure decision-makers understand why they need it. Boards and executives don’t think in terms of firewalls and threat detection; they care about business continuity, revenue protection and return on investment (ROI). For cyber investments, though, ROI is not typically the figure security experts use to validate spending, largely because of the difficulty of estimating the value of risk reduction. However, new approaches to cyber risk quantification have made this a reality. With models validated by real-world loss data, it is now possible to produce an ROI figure.
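To make the CRQ idea concrete, here is a small illustrative calculation using the standard annualized loss expectancy (ALE) and return on security investment (ROSI) formulas. All dollar figures and probabilities below are invented for the example, not drawn from the article:

```python
# Illustrative cyber risk quantification: annualized loss expectancy (ALE)
# and return on security investment (ROSI). All figures are made up.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: expected annual loss from a given threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

ale_before = ale(2_000_000, 0.30)  # breach costing $2M, 30% yearly likelihood
ale_after = ale(2_000_000, 0.06)   # proposed control cuts likelihood to 6%
control_cost = 150_000             # annual cost of the proposed control

risk_reduction = ale_before - ale_after
rosi = (risk_reduction - control_cost) / control_cost

print(f"ALE before: ${ale_before:,.0f}, after: ${ale_after:,.0f}")
print(f"ROSI: {rosi:.0%}")  # a dollar-denominated figure a board can weigh
```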


Can AI predict who will commit crime?

Simulating the conditions for individual offending is not the same as calculating the likelihood of storms or energy outages. Offending is often situational and is heavily influenced by emotional, psychological and environmental elements (a bit like sport – ever wondered why Predictive AI hasn’t put bookmakers out of business yet?). Sociological factors also play a big part in rehabilitation which, in turn, affects future offending. Predictive profiling relies on past behaviour being a good indicator of future conduct. Is this a fair assumption? Occupational psychologists say past behaviour is a reliable predictor of future performance – which is why they design job selection around it. Unlike financial instruments which warn against assuming future returns from past rewards, human behaviour does have a perennial quality. Leopards and spots come to mind. ... Even if the data could reliably tell us who will be charged with, prosecuted for and convicted of which specific offence in the future, what should the police do about it now? Implant a biometric chip and have them under perpetual surveillance to stop them doing what they probably didn’t know they were going to do? Fine or imprison them? (how much, for how long?). What standard of proof will the AI apply to its predictions? Beyond a reasonable doubt? How will we measure the accuracy of the process? 


CISOs battle security platform fatigue

“Adopting more security tools doesn’t guarantee better cybersecurity,” says Jonathan Gill, CEO at Panaseer. “These tools can only report on what they can see – but they don’t know what they’re missing.” This fragmented visibility leaves security leaders making high-stakes decisions based on partial information. Without a verified, comprehensive system of record for all assets and security controls, many organizations are operating under what Gill calls an “illusion of visibility.” “Without a true denominator,” he explains, “CISOs are unable to confidently assess coverage gaps or prove compliance with evolving regulatory demands.” And those blind spots aren’t just theoretical. Every overlooked asset or misconfigured control becomes an open door for attackers — and they’re getting better at finding them. “Each of these coverage gaps represents risk,” Gill warns, “and they are increasingly easy for attackers to find and exploit.” The lack of clear visibility also muddies accountability. “This creates dark corners that go overlooked – servers and applications are left without owners, making it hard to assign responsibility for fixing issues,” Gill says. Even when gaps are known, security teams often find themselves drowning in data from too many tools, struggling to separate signal from noise. 

Daily Tech Digest - April 04, 2025


Quote for the day:

“Going into business for yourself, becoming an entrepreneur, is the modern-day equivalent of pioneering on the old frontier.” -- Paula Nelson



Hyperlight Wasm points to the future of serverless

WebAssembly support significantly expands the range of supported languages for Hyperlight, ensuring that compiled languages as well as interpreted ones like JavaScript can be run on a micro VM. Your image does get more complex here, as you need to bundle an additional runtime in the Hyperlight image, along with writing code that loads both runtime and application as part of the launch process. ... There’s a lot of work going on in the WebAssembly community to define a specification for a component model. This is intended to be a way to share binaries and libraries, allowing code to interoperate easily. The Hyperlight Wasm tool offers the option of compiling a development branch with support for WebAssembly Components, though it’s not quite ready for prime time. In practice, this will likely be the basis for any final build of the platform, as the specification is being driven by the main WebAssembly platforms. One point that Microsoft makes is that Wasm isn’t only language-independent, it’s architecture-independent, working against a minimal virtual machine. So, code written and developed on an x64 architecture system will run on Arm64 and vice versa, ensuring portability and allowing service providers to move applications to any spare capacity, no matter the host virtual machine.


Beyond SIEM: Embracing unified XDR for smarter security

Implementing SIEM solutions can have challenges and has to be managed proactively. Configuring a SIEM system can be very complex, and any error can lead to false positives or missed threats. Integrating SIEM tools with existing security tools and systems is not easy. The implementation and maintenance processes are also resource-intensive and require significant time and manpower. Alert fatigue can set in with traditional SIEM platforms, where the volume of alerts generated makes it difficult to identify the genuine ones. ... For industries with stringent compliance requirements, such as finance and healthcare, SIEM remains a necessity due to its log retention, compliance reporting, and event correlation capabilities. Microsoft Sentinel’s AI-driven analytics help security teams fine-tune alerts, reducing false positives and increasing threat detection accuracy. The Microsoft Defender XDR platform offers unified visibility across attack surfaces; continuous threat exposure management (CTEM); CIS framework assessment; Zero Trust and external attack surface management (EASM) capabilities; AI-driven automated response to threats; integrated security across Microsoft 365 and third-party platforms, covering Office, email, data, CASB, endpoint, and identity; and reduced complexity by eliminating the need for custom configurations.


Compliance Without Chaos: Build Resilient Digital Operations

A unified platform makes service ownership a no-brainer by directly connecting critical services to the right responders so there’s no scrambling when things go sideways. Teams can set up services quickly and at scale, making it easier to get a real-time pulse on system health and see just how far the damage spreads when something breaks. Instead of chasing down data across a dozen monitoring tools, everything is centralized in one place for easy analysis. ... With all data centralized in a unified platform, the classification and reporting of incidents is far easier with accessible and detailed incident logs that provide a clear audit trail. Sophisticated platforms also integrate with IT service management (ITSM) and IT operations (ITOps) tools to simplify the reporting of incidents based on predefined criteria. ... Every incident, both real and simulated, should be viewed as a learning opportunity. Aggregating data from disparate tools into a single location gives teams a full picture of how their organization’s operations have been affected and supplies a narrative for reporting. Teams can then uncover patterns across tools, teams and time to drive continuous learning in post-incident reviews. Coupled with regular, automated testing of disaster recovery runbooks, teams can build greater confidence in their system’s resilience.


How Organizations Can Benefit From Intelligent Data Infra

The first is getting your enterprise data AI-ready. Predictive AI has been around for a long time. But teams still spend a significant amount of time identifying and cleaning data, which involves handling ETL pipelines, transformations and loading data into data lakes. This is the most expensive step. The same process applies to unstructured data in generative AI. But organizations still need to identify the files and object streams that need to be a part of the training datasets. Organizations need to securely bring them together and load them into feature stores. That's our approach to data management. ... There's a lot of intelligence tied to files and objects. Without that, they will continue to be seen as simple storage entities. With embedded intelligence, you get detection capabilities that let you see what's inside a file and when it was last modified. For instance, if you create embeddings from a PDF file and vectorize them, imagine doing the same for millions of files, which is typical in AI training. This consumes significant computing resources. You don't want to spend compute resources while recreating embeddings on a million files every time there is a modification to the files. Metadata allows us to track changes and only reprocess the files that have been modified. This differential approach optimizes compute cycles.
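A minimal sketch of the differential approach described above, assuming content hashes as the tracked metadata and a placeholder for the real (expensive) embedding call:

```python
# Sketch of metadata-driven differential reprocessing: only files whose
# content hash changed since the last run are re-embedded.

import hashlib
from pathlib import Path

def content_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def embed(path: Path) -> list[float]:
    """Placeholder for the real (compute-heavy) embedding call."""
    return [0.0]

def reprocess_changed(files: list[Path], metadata: dict[str, str]) -> int:
    """Re-embed only modified files; `metadata` maps path -> last seen hash."""
    reprocessed = 0
    for path in files:
        digest = content_hash(path)
        if metadata.get(str(path)) != digest:  # new or modified file
            embed(path)
            metadata[str(path)] = digest
            reprocessed += 1
    return reprocessed  # unchanged files cost no compute at all
```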


Tariff war throws building of data centers into disarray

The potentially biggest variable affecting data center strategy is timing. Depending on the size of an enterprise data center and its purpose, it could take as little as six months to build, or as much as three years. Planning for a location is daunting when ever-changing tariffs and retaliatory tariffs could send costs soaring. Another critical element is knowing when those tariffs will take effect, a data point that has also been changing. Some enterprises are trying to sidestep the tariff issues by purchasing components in bulk, in enough quantities to potentially last a few years. ... “It’s not only space, available energy, cooling, and water resources, but it’s also a question of proximity to where the services are going to be used,” Nguyen said. Finding data center personnel, Nguyen said, is becoming less of an issue, thanks to the efficiencies gained through automation. “The level of automation available means that although personnel costs can be a bit more [in different countries], the efficiencies used means that [hiring people] won’t be the drag that it used to be,” he said. Given the vast amount of uncertainty, enterprise IT leaders wrestling with data center plans have some difficult decisions to make, mostly because they will have to guess where the tariff wars will be many months or years in the future, a virtually impossible task.


The Modern Data Architecture: Unlocking Your Data's Full Potential

If the Data Cloud is your engine, the CDP is your steering wheel—directing that power where it needs to go, precisely when it needs to get there. True real-time CDPs have the ability to transform raw data into immediate action across your entire technology ecosystem, with an event-based architecture that responds to customer signals in milliseconds rather than minutes. This ensures you can dynamically personalize experiences as they unfold—whether during a website visit, mobile app session, or contact center interaction, all while honoring consent. ... As AI capabilities evolve, this Intelligence Layer becomes increasingly autonomous—not just providing recommendations but taking appropriate actions based on pre-defined business rules and learning from outcomes to continuously improve its performance. ... The Modern Data Architecture serves as the foundation for truly intelligent customer experiences by making AI implementations both powerful and practical. By providing clean, unified data at scale, these architectures enable AI systems to generate more accurate predictions, more relevant recommendations, and more natural conversational experiences. Rather than creating isolated AI use cases, forward-thinking organizations are embedding intelligence throughout the customer journey.


Why AI therapists could further isolate vulnerable patients instead of easing suffering

While chatbots can be programmed to provide some personalised advice, they may not be able to adapt as effectively as a human therapist can. Human therapists tailor their approach to the unique needs and experiences of each person. Chatbots rely on algorithms to interpret user input, but miscommunication can happen due to nuances in language or context. For example, chatbots may struggle to recognise or appropriately respond to cultural differences, which are an important aspect of therapy. A lack of cultural competence in a chatbot could alienate and even harm users from different backgrounds. So while chatbot therapists can be a helpful supplement to traditional therapy, they are not a complete replacement, especially when it comes to more serious mental health needs. ... The talking cure in psychotherapy is a process of fostering human potential for greater self-awareness and personal growth. These apps will never be able to replace the therapeutic relationship developed as part of human psychotherapy. Rather, there’s a risk that these apps could limit users’ connections with other humans, potentially exacerbating the suffering of those with mental health issues – the opposite of what psychotherapy intends to achieve.


Breaking Barriers in Conversational BI/AI with a Semantic Layer

The push for conversational BI was met with adoption inertia. Two major challenges have hindered its potential—the accuracy of the data insights and the speed at which the interface could provide the answers that were sought. This can be attributed to the inherent complexity of data architecture, which involves fragmented data in disparate systems with varying definitions, formats, and contexts. Without a unified structure, even the most advanced AI models risk delivering contextually irrelevant, inconsistent, or inaccurate results. Moreover, traditional data pipelines are not designed for instantaneous query resolution or for resolving data across multiple tables, which delays responses. ... Large language models (LLMs) like GPT excel at interpreting natural language but lack the domain-specific knowledge of a data set. A semantic layer can resolve this challenge by acting as an intermediary between raw data and the conversational interface. It unifies data into a consistent, context-aware model that is comprehensible to both humans and machines. Retrieval-augmented generation (RAG) techniques are employed to combine the generative power of LLMs with the retrieval capabilities of structured data systems.
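A minimal sketch of how a semantic layer can ground an LLM prompt: business-term definitions are retrieved and prepended before query generation. The definitions are hypothetical, and naive substring matching stands in for real retrieval:

```python
# Sketch of a semantic layer feeding a RAG-style pipeline: business-term
# definitions are grounded into the prompt before the LLM writes a query.
# The definitions here are illustrative placeholders.

SEMANTIC_LAYER = {
    "active customer": "customer with >=1 order in the last 90 days (orders.order_ts)",
    "net revenue": "SUM(order_items.amount) - SUM(refunds.amount)",
}

def retrieve_definitions(question: str) -> list[str]:
    """Naive retrieval: match business terms mentioned in the question."""
    return [f"{term}: {defn}" for term, defn in SEMANTIC_LAYER.items()
            if term in question.lower()]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve_definitions(question))
    return (f"Using ONLY these business definitions:\n{context}\n"
            f"Write SQL answering: {question}")

print(build_prompt("What is net revenue from each active customer?"))
```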


The rise of AI PCs: How businesses are reshaping their tech to keep up

Companies are discovering that if they want to take full advantage of AI and run models locally, they need to upgrade their employees' laptops. This realization has sparked a hardware revolution, with tech refreshes shifting from an afterthought to a priority and attracting significant investment from companies. ... running models locally gives organizations more control over their information and reduces reliance on third-party services. That setup is crucial for companies in financial services, healthcare, and other industries where privacy is a big concern or a regulatory requirement. "For them, on-device AI computer, it's not a nice to have; it's a need to have for fiduciary and HIPAA reasons, respectively," said Mike Bechtel, managing director and the chief futurist at Deloitte Consulting LLP. Another advantage is that local running reduces lag and creates a smoother user experience, which is especially valuable for optimizing business applications. ... As more companies get in on the action and AI-capable computers become ubiquitous, the premium price of AI PCs will continue to drop. Furthermore, Flower said the potential gains in performance offset any price differences. "In those high-value professions, the productivity gain is so significant that whatever small premium you're paying for that AI-enhanced device, the payback will be nearly immediate," said Flower.


Many CIOs operate within a culture of fear

The culture of fear often stems from a few roots, including a lack of accountability from employees who don’t understand their roles, and mistrust of coworkers and management, says Alex Yarotsky, CTO at Hubstaff, vendor of a time tracking and workforce management tool. In both cases, company leadership is to blame. Good leaders create a positive culture laid out in a set of rules and guidelines for employees to follow, and then model those actions themselves, Yarotsky says. “Any case of misunderstanding or miscommunication is always on the management because the management is the force in the company that sets the rules and drives the culture,” he adds. ... Such a culture often starts at the top, says Jack Allen, CEO and chief Salesforce architect at ITequality, a Salesforce consulting firm. Allen experienced this scenario in the early days of building a career, suggesting the problems may be bigger than the survey respondents indicate. “If the leader is unwilling to admit mistakes or punishes mistakes in an unfair way, then the next layer of leadership will be afraid to admit mistakes as well,” Allen says. ... Cultivating a culture of fear leads to several problems, including an inability to learn from mistakes, Mort says. “Organizations that do the best are those that value learning and highlight incidents as valuable learning events,” he says.

Daily Tech Digest - April 03, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart


Veterans are an obvious fit for cybersecurity, but tailored support ensures they succeed

Both civilian and military leaders have long seen veterans as strong candidates for cybersecurity roles. The National Initiative for Cybersecurity Careers and Studies, part of the US Cybersecurity and Infrastructure Security Agency (CISA), speaks directly to veterans, saying “Your skills and training from the military translate well to a cyber career.” NICCS continues, “Veterans’ backgrounds in managing high-pressure situations, attention to detail, and understanding of secure communications make them particularly well-suited for this career path.” Gretchen Bliss, director of cybersecurity programs at the University of Colorado at Colorado Springs (UCCS), speaks specifically to security execs on the matter: “If I were talking to a CISO, I’d say get your hands on a veteran. They understand the practical application piece, the operational piece, they have hands-on experience. They think things through, they know how to do diagnostics. They already know how to tackle problems.” ... And for veterans who haven’t yet mastered all that, Andrus advises “networking with people who actually do the job you want.” He also advises veterans to learn about the environment at the organization they seek to join, asking themselves whether they’d fit in. And he recommends connecting with others to ease the transition.


The 6 disciplines of strategic thinking

A strategic thinker is not just a good worker who approaches a challenge with the singular aim of resolving the problem in front of them. Rather, a strategic thinker looks at and elevates their entire ecosystem to achieve a robust solution. ... The first discipline is pattern recognition. A foundation of strategic thinking is the ability to evaluate a system, understand how all its pieces move, and derive the patterns they typically form. ... Watkins’s next discipline, and an extension of pattern recognition, is systems analysis. It is easy to get overwhelmed when breaking down the functional elements of a system. A strategic thinker avoids this by creating simplified models of complex patterns and realities. ... Mental agility is Watkins’s third discipline. Because the systems and patterns of any work environment are so dynamic, leaders must be able to change their perspective quickly to match the role they are examining. Systems evolve, people grow, and the larger picture can change suddenly. ... Structured problem-solving is a discipline you and your team can use to address any issue or challenge. The idea of problem-solving is self-explanatory; the essential element is the structure. Developing and defining a structure will ensure that the correct problem is addressed in the most robust way possible.


Why Vendor Relationships Are More Important Than Ever for CIOs

Trust is the necessary foundation, which is built through open communication, solid performance, relevant experience, and proper security credentials and practices. “People buy from people they trust, no matter how digital everything becomes,” says Thompson. “That human connection remains crucial, especially in tech where you're often making huge investments in mission-critical systems.” ... An executive-level technology governance framework helps ensure effective vendor oversight. According to Malhotra, it should consist of five key components, including business relationship management, enterprise technology investment, transformation governance, value capture and having the right culture and change management in place. Beneath the technology governance framework is active vendor governance, which institutionalizes oversight across ten critical areas including performance management, financial management, relationship management, risk management, and issues and escalations. Other considerations include work order management, resource management, contract and compliance, having a balanced scorecard across vendors and principled spend and innovation.


Shadow Testing Superpowers: Four Ways To Bulletproof APIs

API contract testing is perhaps the most immediately valuable application of shadow testing. Traditional contract testing relies on mock services and schema validation, which can miss subtle compatibility issues. Shadow testing takes contract validation to the next level by comparing actual API responses between versions. ... Performance testing is another area where shadow testing shines. Traditional performance testing usually happens late in the development cycle in dedicated environments with synthetic loads that often don’t reflect real-world usage patterns. ... Log analysis is often overlooked in traditional testing approaches, yet logs contain rich information about application behavior. Shadow testing enables sophisticated log comparisons that can surface subtle issues before they manifest as user-facing problems. ... Perhaps the most innovative application of shadow testing is in the security domain. Traditional security testing often happens too late in the development process, after code has already been deployed. Shadow testing enables a true shift left for security by enabling dynamic analysis against real traffic patterns. ... What makes these shadow testing approaches particularly valuable is their inherently low-maintenance nature. 
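A minimal sketch of the response-comparison idea behind shadow contract testing: the same request is sent to the live and candidate versions, and the JSON payloads are diffed. The URLs are placeholders and error handling is omitted for brevity:

```python
# Sketch of shadow testing an API contract: send the same request to the
# current and candidate versions, then diff the JSON responses.
# The base URLs are placeholders for real service endpoints.

import json
import urllib.request

def fetch(base_url: str, path: str) -> dict:
    with urllib.request.urlopen(base_url + path) as resp:
        return json.load(resp)

def shadow_compare(path: str) -> list[str]:
    live = fetch("https://api-v1.example.com", path)    # serves real traffic
    shadow = fetch("https://api-v2.example.com", path)  # candidate; its response is discarded
    mismatches = []
    for key in live.keys() | shadow.keys():
        if live.get(key) != shadow.get(key):
            mismatches.append(f"{key}: {live.get(key)!r} != {shadow.get(key)!r}")
    return mismatches  # a non-empty list means contract drift to investigate
```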


Rethinking technology and IT's role in the era of agentic AI and digital labor

Rethinking technology and the role of IT will drive a shift from the traditional model to a business technology-focused model. One example will be the shift from one large, dedicated IT team that traditionally handles an organization's technology needs, overseen and directed by the CIO, to more focused IT teams that will perform strategic, high-value activities and help drive technology innovation strategy as Gen AI handles many routine IT tasks. Another shift will be spending and budget allocations. Traditionally, CIOs manage the enterprise IT budget and allocation. In the new model, spending on enterprise-wide IT investments continues to be assessed and guided by the CIO, and some enterprise technology investments are now governed and funded by the business units. ... Today, agentic AI is not just answering questions -- it's creating. Agents take action autonomously. And it's changing everything about how technology-led enterprises must design, deploy, and manage new technologies moving forward. We are building self-driving autonomous businesses using agentic AI where humans and machines work together to deliver customer success. However, giving agency to software or machines to act will require a new currency. Trust is the new currency of AI.


From Chaos to Control: Reducing Disruption Time During Cyber Incidents and Breaches

Cyber disruptions are no longer isolated incidents; they have ripple effects that extend across industries and geographic regions. In 2024, two high-profile events underscored the vulnerabilities in interconnected systems. The CrowdStrike IT outage resulted in widespread airline cancellations, impacting financial markets and customer trust, while the Change Healthcare ransomware attack disrupted claims processing nationwide, costing billions in financial damages. These cases emphasize why resilience professionals must proactively integrate automation and intelligence into their incident response strategies. ... Organizations need structured governance models that define clear responsibilities before, during, and after an incident. AI-driven automation enables proactive incident detection and streamlined responses. Automated alerts, digital action boards, and predefined workflows allow teams to act swiftly and decisively, reducing downtime and minimizing operational losses. Data is the foundation of effective risk and resilience management. When organizations ensure their data is reliable and comprehensive, they gain an integrated view that enhances visibility across business continuity, IT, and security teams. 


What does an AI consultant actually do?

AI consulting involves advising on, designing and implementing artificial intelligence solutions. The spectrum is broad, ranging from process automation using machine learning models to setting up chatbots and performing complex analyses using deep learning methods. However, the definition of AI consulting goes beyond the purely technical perspective. It is an interdisciplinary approach that aligns technological innovation with business requirements. AI consultants are able to design technological solutions that are not only efficient but also make strategic sense. ... All in all, both technical and strategic thinking is required: Unlike some other technology professions, AI consulting not only requires in-depth knowledge of algorithms and data processing, but also strategic and communication skills. AI consultants talk to software development and IT departments as well as to management, product management or employees from the relevant field. They have to explain technical interrelations clearly and comprehensibly so that the company can make decisions based on this knowledge. Since AI technologies are developing rapidly, continuous training is important, whether through online courses, boot camps, and certificates or through workshops and conferences.


Building a cybersecurity strategy that survives disruption

The best strategies treat resilience as a core part of business operations, not just a security add-on. “The key to managing resilience is to approach it like an onion,” says James Morris, Chief Executive of The CSBR. “The best strategy is to be effective at managing the perimeter. This approach will allow you to get a level of control on internal and external forces which are key to long-term resilience.” That layered thinking should be matched by clearly defined policies and procedures. “Ensure that your ‘resilience’ strategy and policies are documented in detail,” Morris advises. “This is critical for response planning, but also for any legal issues that may arise. If it’s not documented, it doesn’t happen.” ... Move beyond traditional monitoring by implementing advanced, behaviour-based anomaly detection and AI-driven solutions to identify novel threats. Invest in automation to enhance the efficiency of detection, triage, and initial response tasks, while orchestration platforms enable coordinated workflows across security and IT tools, significantly boosting response agility. ... A good strategy starts with the idea that stuff will break. So you need things like segmentation, backups, and backup plans for your backup plans, along with alternate ways to get back up and running. Fast, reliable recovery is key. Just having backups isn’t enough anymore.


3 key features in Kong AI Gateway 3.10

For teams working with sensitive or regulated data, protecting personally identifiable information (PII) in AI workflows is not optional; it’s essential for proper governance. Developers often use regex libraries or handcrafted filters to redact PII, but these DIY solutions are prone to error, inconsistent enforcement, and missed edge cases. Kong AI Gateway 3.10 introduces out-of-the-box PII sanitization, giving platform teams a reliable, enterprise-grade solution to scrub sensitive information from prompts before they reach the model and, if needed, reinsert the sanitized data in the response before it returns to the end user. ... As organizations adopt multiple LLM providers and model types, complexity can grow quickly. Different teams may prefer OpenAI, Claude, or open-source models like Llama or Mistral. Each comes with its own SDKs, APIs, and limitations. Kong AI Gateway 3.10 solves this with universal API support and native SDK integration. Developers can continue using the SDKs they already rely on (e.g., AWS, Azure) while Kong translates requests at the gateway level to interoperate across providers. This eliminates the need for rewriting app logic when switching models and simplifies centralized governance. This latest release also includes cost-based load balancing, enabling Kong to route requests based on token usage and pricing.
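For contrast, here is roughly what the hand-rolled regex approach the article warns about looks like. The sketch is deliberately minimal: it catches only two obvious formats and will miss many real-world PII variants, which is exactly the gap a gateway-level sanitizer aims to close:

```python
# A typical DIY prompt sanitizer of the kind teams often hand-roll.
# Deliberately minimal: it catches obvious emails and US-style SSNs but
# misses many real-world PII formats -- the brittleness described above.

import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789, about her claim."))
# -> "Contact [EMAIL], SSN [SSN], about her claim."
```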


The future of IT operations with Dark NOC

From a Managed Service Provider (MSP) perspective, Dark NOC will shift the way IT operates today by making it more efficient, scalable, and cost-effective. It will replace the Traditional NOC’s manual-intensive task of continuous monitoring, diagnosing, and resolving issues across multiple customer environments. ... Another key capability Dark NOC gives MSPs is scalability. Its analytics and automation capability allows it to manage thousands of endpoints effortlessly without proportionally increasing engineers’ headcount. This enables MSPs to extend their service portfolios, onboard new customers, and increase profit margins while retaining a lean operational model. From a competitive point of view, adopting Dark NOC enables MSPs to differentiate themselves from competitors by offering proactive, AI-driven IT services that minimise downtime, enhance security and maximise performance. Dark NOC helps MSPs provide premium service at affordable price points to customers while making a decent margin internally. ... Cloud infrastructure monitoring and management provides real-time cloud resource monitoring and predictive insights; examples include AWS CloudWatch, Azure Monitor, and Google Cloud Operations Suite.

Daily Tech Digest - April 02, 2025


Quote for the day:

"People will not change their minds but they will make new decisions based upon new information." -- Orrin Woodward


The smart way to tackle data storage challenges

Data intelligence makes data stored on the X10000 ready for AI applications to use as soon as they are ingested. The company has a demo of this, where the X10000 ingests customer support documents and enables users to instantly ask it relevant natural language questions via a locally hosted version of the DeepSeek LLM. This kind of application wouldn’t be possible with low-speed legacy object storage, says the company. The X10000’s all-NVMe storage architecture helps to support low-latency access to this indexed and vectorized data, avoiding front-end caching bottlenecks. Advances like these provide up to 6x faster performance than the X10000’s leading object storage competitors, according to HPE’s benchmark testing. ... The containerized architecture opens up options for inline and out-of-band software services, such as automated provisioning and life cycle management of storage resources. It is also easier to localize a workload’s data and compute resources, minimizing data movement by enabling workloads to process data in place rather than moving it to other compute nodes. This is an important performance factor in low-latency applications like AI training and inference. Another aspect of container-based workloads is that all workloads can interact with the same object storage layer. 


Talent gap complicates cost-conscious cloud planning

The top strategy so far is what one enterprise calls the “Cloud Team.” You assemble all your people with cloud skills, and your own best software architect, and have the team examine current and proposed cloud applications, looking for a high-level approach that meets business goals. In this process, the team tries to avoid implementation specifics, focusing instead on the notion that a hybrid application has an agile cloud side and a governance-and-sovereignty data center side, and what has to be done is push functionality into the right place. ... For enterprises that tried the Cloud Team, there’s also a deeper lesson. In fact, there are two. Remember the old “the cloud changes everything” claim? Well, it does, but not the way we thought, or at least not as simply and directly as we thought. The economic revolution of the cloud is selective, a set of benefits that has to be carefully fit to business problems in order to deliver the promised gains. Application development overall has to change, to emphasize a strategic-then-tactical flow that top-down design always called for but didn’t always deliver. That’s the first lesson. The second is that the kinds of applications that the cloud changes the most are applications we can’t move there, because they never got implemented anywhere else.


Your smart home may not be as secure as you think

Most smart devices rely on Wi-Fi to communicate. If these devices connect to an unsecured or poorly protected Wi-Fi network, they can become an easy target. Unencrypted networks are especially vulnerable, and hackers can intercept sensitive data, such as passwords or personal information, being transmitted from the devices. ... Many smart devices collect personal data—sometimes more than users realize. Some devices, like voice assistants or security cameras, are constantly listening or recording, which can lead to privacy violations if not properly secured. In some cases, manufacturers don’t encrypt or secure the data they collect, making it easier for malicious actors to exploit it. ... Smart home devices often connect to third-party platforms or other devices. These integrations can create security holes if the third-party services don’t have strong protections in place. A breach in one service could give attackers access to an entire smart home ecosystem. To mitigate this risk, it’s important to review the security practices of any third-party service before integrating it with your IoT devices. ... If your devices support it, always enable 2FA and link your accounts to a reliable authentication app or your mobile number. You can use 2FA with smart home hubs and cloud-based apps that control IoT devices.


Beyond compensation—crafting an experience that retains talent

Looking ahead, the companies that succeed in attracting and retaining top talent will be those that embrace innovation in their Total Rewards strategies. AI-driven personalization is already changing the game—organizations are using AI-powered platforms to tailor benefits to individual employee needs, offering a menu of options such as additional PTO, learning stipends, or wellness perks. Similarly, equity-based compensation models are evolving, with some businesses exploring cryptocurrency-based rewards and fractional ownership opportunities. Sustainability is also becoming a key factor in Total Rewards. Companies that incorporate sustainability-linked incentives, such as carbon footprint reduction rewards or volunteer days, are seeing higher engagement and satisfaction levels. ... Total Rewards is no longer just about compensation—it’s about creating an ecosystem that supports employees in every aspect of their work and life. Companies that adopt the VALUE framework—Variable pay, Aligned well-being benefits, Learning and growth opportunities, Ultimate flexibility, and Engagement-driven recognition—will not only attract top talent but also foster long-term loyalty and satisfaction.


Bridging the Gap Between the CISO & the Board of Directors

Many executives, including board members, may not fully understand the CISO's role. This isn't just a communications gap; it's also an opportunity to build relationships across departments. When CISOs connect security priorities to broader business goals, they show how cybersecurity is a business enabler rather than just an operational cost. ... Often, those in technical roles lack the ability to speak anything other than the language of tech, making it harder to communicate with board members who don't hold tech or cybersecurity expertise. I remember presenting to our board early into my CISO role and, once I was done, seeing some blank stares. The issue wasn't that they didn't care about what I was saying; we just weren't speaking the same language. ... There are many areas in which communication between a board and CISO is important — but there may be none more important than compliance. Data breaches today are not just technical failures. They carry significant legal, financial, and reputational consequences. In this environment, regulatory compliance isn't just a box to check; it's a critical business risk that CISOs must manage, particularly as boards become more aware of the business impact of control failures in cybersecurity.


What does a comprehensive backup strategy look like?

Though backups are rarely needed, they form the foundation of disaster recovery. Milovan follows the classic 3-2-1 rule: three data copies, on two different media types, with one off-site copy. He insists on maintaining multiple copies “just in case.” In addition, NAS users need to update their OS regularly, Synology’s Alexandra Bejan says. “Outdated operating systems are particularly vulnerable there.” Bejan emphasizes the positives from implementing the textbook best practices Ichthus employs. ... One may imagine that smaller enterprises make for easier targets due to their limited IT. However, nothing could be further from the truth. Bejan: “We have observed that the larger the enterprise, the more difficult it is to implement a comprehensive data protection strategy.” She says the primary reason for this lies in the previously fragmented investments in backup infrastructure, where different solutions were procured for various workloads. “These legacy solutions struggle to effectively manage the rapidly growing number of workloads and the increasing data size. At the same time, they require significant human resources for training, with steep learning curves, making self-learning difficult. When personnel are reassigned, considerable time is needed to relearn the system.”
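The 3-2-1 rule lends itself to a quick automated check. Below is a minimal sketch under an assumed backup-inventory format; the fields and example values are illustrative:

```python
# Sketch: encode the 3-2-1 rule as a check over a backup inventory.
# The inventory format is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str
    media: str      # e.g. "nas", "tape", "cloud"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    return (len(copies) >= 3                          # three copies of the data
            and len({c.media for c in copies}) >= 2   # on two different media types
            and any(c.offsite for c in copies))       # with one copy off-site

inventory = [
    BackupCopy("primary NAS", media="nas", offsite=False),
    BackupCopy("local tape", media="tape", offsite=False),
    BackupCopy("cloud bucket", media="cloud", offsite=True),
]
print(satisfies_3_2_1(inventory))  # True
```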


Malicious actors increasingly put privileged identity access to work across attack chains

Many of these credentials are extracted from computers using so-called infostealer malware, malicious programs that scour the operating system and installed applications for saved usernames and passwords, browser session tokens, SSH and VPN certificates, API keys, and more. The advantage of using stolen credentials for initial access is that they require less skill compared to exploiting vulnerabilities in publicly facing applications or tricking users into installing malware from email links or attachments — although these initial access methods remain popular as well. ... “Skilled actors have created tooling that is freely available on the open web, easy to deploy, and designed to specifically target cloud environments,” the Talos researchers found. “Some examples include ROADtools and AADInternals, publicly available frameworks designed to enumerate Microsoft Entra ID environments. These tools can collect data on users, groups, applications, service principals, and devices, and execute commands.” These are often coupled with techniques designed to exploit the lack of MFA or incorrectly configured MFA. For example, push spray attacks, also known as MFA bombing or MFA fatigue, rely on bombarding the user with MFA push notifications on their phones until they get annoyed and approve the login thinking it’s probably the system malfunctioning.


Role of Blockchain in Enhancing Cybersecurity

At its core, a blockchain is a distributed ledger in which each data block is cryptographically connected to its predecessor, forming an unbreakable chain. Without network authorization, modifying or removing data from a blockchain becomes exceedingly difficult. This ensures that conventional data records stay consistent and accurate over time. The architectural structure of blockchain plays a critical role in protecting data integrity. Every single transaction is time-stamped and merged into a block, which is then confirmed and sealed through consensus. This process provides an undeniable record of all activities, simplifying audits and boosting confidence in system reliability. Similarly, blockchain ensures that every financial transaction is correctly documented and easily accessible. This innovation helps prevent record manipulation, double-spending, and other forms of fraud. By combining cryptographic safeguards with a decentralized architecture, it offers an ideal solution to information security. It also significantly reduces risks related to data breaches, hacking, and unauthorized access in the digital realm. Furthermore, blockchain strengthens cybersecurity by addressing concerns about unauthorized access and the rising threat of cyberattacks. 
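A minimal sketch of the hash-chaining idea described above: each block commits to its predecessor's hash, so altering any earlier block breaks every later link. Consensus and time-stamping are out of scope here; this shows only the tamper-evidence property:

```python
# Minimal hash chain: each block stores the hash of its predecessor, so
# tampering with any block invalidates every block that follows it.

import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list[dict], data: str, timestamp: int) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "ts": timestamp, "prev": prev})

def verify(chain: list[dict]) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list[dict] = []
append_block(chain, "alice pays bob 5", timestamp=1)
append_block(chain, "bob pays carol 2", timestamp=2)
print(verify(chain))                     # True
chain[0]["data"] = "alice pays bob 500"  # attempt to rewrite history
print(verify(chain))                     # False: the chain exposes the tampering
```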


Thriving in the Second Wave of Big Data Modernization

When businesses want to use big data to power AI solutions – as opposed to the more traditional types of analytics workloads that predominated during the first wave of big data modernization – the problems stemming from poor data management snowball. They transform from mere annoyances or hindrances into show stoppers. ... But in the age of AI, this process would likely instead entail giving the employee access to a generative AI tool that can interpret a question formulated using natural language and generate a response based on the organizational data that the AI was trained on. In this case, data quality or security issues could become very problematic. ... Unfortunately, there is no magic bullet that can cure the types of issues I’ve laid out above. A large part of the solution involves continuing to do the hard work of improving data quality, erecting effective access controls and making data infrastructure even more scalable. As they do these things, however, businesses must pay careful attention to the unique requirements of AI use cases. For example, when they create security controls, they must do so in ways that are recognizable to AI tools, such that the tools will know which types of data should be accessible to which users.


The DevOps Bottleneck: Why IaC Orchestration is the Missing Piece

At the end of the day, instead of eliminating operational burdens, many organizations just shifted them. DevOps, SREs, CloudOps—whatever you call them—these teams still end up being the gatekeepers. They own the application deployment pipelines, infrastructure lifecycle management, and security policies. And like any team, they seek independence and control—not out of malice, but out of necessity. Think about it: If your job is to keep production stable, are you really going to let every dev push infrastructure changes willy-nilly? Of course not. The result? Silos of unique responsibility and sacred internal knowledge. The very teams that were meant to empower developers become blockers instead. ... IaC orchestration isn’t about replacing your existing tools; it’s about making them work at scale. Think about how GitHub changed software development. Version control wasn’t new—but GitHub made it easier to collaborate, review code, and manage contributions without stepping on each other’s work. That’s exactly what orchestration does for IaC. It allows large teams to manage complex infrastructure without turning into a bottleneck. It enforces guardrails while enabling self-service for developers. 

Daily Tech Digest - April 01, 2025


Quote for the day:

"Strategy is not really a solo sport _ even if you_re the CEO." -- Max McKeown


MCP: The new “USB-C for AI” that’s bringing fierce rivals together

So far, MCP has also garnered interest from multiple tech companies in a rare show of cross-platform collaboration. For example, Microsoft has integrated MCP into its Azure OpenAI service, and as we mentioned above, Anthropic competitor OpenAI is on board. Last week, OpenAI acknowledged MCP in its Agents API documentation, with vocal support from the boss upstairs. "People love MCP and we are excited to add support across our products," wrote OpenAI CEO Sam Altman on X last Wednesday. ... To make the connections behind the scenes between AI models and data sources, MCP uses a client-server model. An AI model (or its host application) acts as an MCP client that connects to one or more MCP servers. Each server provides access to a specific resource or capability, such as a database, search engine, or file system. When the AI needs information beyond its training data, it sends a request to the appropriate server, which performs the action and returns the result. To illustrate how the client-server model works in practice, consider a customer support chatbot using MCP that could check shipping details in real time from a company database. "What's the status of order #12345?" would trigger the AI to query an order database MCP server, which would look up the information and pass it back to the model. 
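MCP's client-server exchanges are JSON-RPC 2.0 messages; the sketch below shows roughly what the order-lookup example might look like on the wire. The tool name, arguments, and result fields are hypothetical illustrations, not taken from the spec:

```python
# Rough shape of an MCP exchange (MCP is built on JSON-RPC 2.0). The tool
# name, arguments, and result text below are hypothetical illustrations.

import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",              # the client asks a server to run a tool
    "params": {
        "name": "lookup_order",          # hypothetical tool on an order-DB server
        "arguments": {"order_id": "12345"},
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "Order #12345: shipped 2025-03-30"}],
    },
}

print(json.dumps(request, indent=2))   # what the model's host application sends
print(json.dumps(response, indent=2))  # what the MCP server returns to the model
```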


Why global tensions are a cybersecurity problem for every business

As global polarization intensifies, cybersecurity threats have become increasingly hybridized, complicating the landscape for threat attribution and defense. Michael DeBolt, Chief Intelligence Officer at Intel 471, explains: “Increasing polarization worldwide has seen the expansion of the state-backed threat actor role, with many established groups taking on financially motivated responsibilities alongside their other strategic goals.” This evolution is notably visible in threat actors tied to countries such as China, Iran, and North Korea. According to DeBolt, “Heightened geopolitical tensions have reflected this transition in groups originating from China, Iran, and North Korea over the last couple of years—although the latter is somewhat more well-known for its duplicitous activity that often blurs the line of more traditional e-crime threats.” These state-backed groups increasingly blend espionage and destructive attacks with financially motivated cybercrime techniques, complicating attribution and creating significant practical challenges for organizations. DeBolt highlights the implications: “A primary practical issue organizations are facing is threat attribution, with a follow-on issue being maintaining an effective security posture against these hybrid threats.”


How to take your first steps in AI without falling off a cliff

It is critical to bring all stakeholders on board through education and training on the fundamental building blocks of data and AI. This involves understanding what’s accessible in the market and differentiating between various AI technologies. Executive buy-in is crucial, and by focusing on internal process outcomes first, organisations can better position themselves to achieve meaningful results in the future. ... Don’t bite off more than you can chew! Trying to deploy a complex AI solution to the entire organisation is asking for trouble. It is better to identify early adopter departments where specific AI pilots and proofs of concept can be introduced and their value measured. Eventually, you might establish an AI assistant studio to develop dedicated AI tools for each use case according to individual needs. ... People are often wary of change, particularly change with such far-reaching implications for how we work. Clear communication, training, and ongoing support will all help reassure employees who fear being left behind. ... In the context of data and AI, the perspective shifts somewhat. Most organisations already have policies in place for public cloud adoption. However, the approach to AI and data must be more nuanced, given the vast potential of the technology involved.


6 hard-earned tips for leading through a cyberattack — from CSOs who’ve been there

Authority under crisis is meaningless if you can’t establish followership. And this goes beyond the incident response team: CISOs must communicate with the entire organization — a commonly misunderstood imperative, says Pablo Riboldi, CISO of nearshore talent provider BairesDev. ... “Organizations should provide training on stress management and decision-making under pressure, which includes perhaps mental health support resources in the incident response plan,” Ngui says. Larry Lidz, vice president of CX Security at Cisco, also advocates for tabletop exercises as a way to get employees to “look at problems through a different set of lenses than they would otherwise look at them.” ... Remaining calm in the face of a cyberattack can be challenging, but prime performance requires it, New Relic’s Gutierrez says. “There’s a lot of reaction. There’s a lot of strong feelings and emotions that go on during incidents,” Gutierrez says. Despite moments when that composure slipped, Gutierrez says they have generally remained calm under cyber duress, something they take pride in. Demonstrating composure as a leader under fire is important because it can influence how others feel, behave, and act.


A “Measured” Approach to Building a World-Class Offensive Security Program

First, map the top threats and threat actors most likely to find your organization an attractive target. Second, identify the top “crown jewel” systems they would target for compromise. Remaining at the enterprise level, the next step is to establish an internal framework and underlying program that graphs threats and risks, and provides a repeatable mechanism to track and refresh that understanding over time. This includes graphs of all enterprise systems and their associated connections and dependencies, as well as attack graphs that represent all the potential paths through your architecture that would lead an attacker to their prize. Finally, the third element is an architectural security review that discerns from the graphs which paths are most possible and probable. Installing a program that guides and tracks these three activities will also pay dividends down the line by better informing and increasing the efficacy of adversarial simulations. We all know the devil resides in the details. At this stage we begin understanding the actual vulnerability of individual assets and systems. The first step is a comprehensive inventory of the elements that exist across the organization, including internal endpoint assets as well as external perimeter and cloud systems. As you’d likely expect, the next step is vulnerability scanning of the full asset inventory that was established.
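As a toy illustration of the attack-graph element, the sketch below models systems as nodes and exploitable connections as directed edges, then enumerates every path from an internet-facing entry point to a crown-jewel system. The nodes and edges are invented; a real graph would be generated from the asset inventory described above.

```python
import networkx as nx

# Directed edges mean "an attacker on the source can reach the target".
g = nx.DiGraph()
g.add_edges_from([
    ("internet", "web-app"),
    ("web-app", "app-server"),
    ("app-server", "customer-db"),      # crown jewel
    ("internet", "vpn-gateway"),
    ("vpn-gateway", "app-server"),
])

entry_points, crown_jewels = ["internet"], ["customer-db"]

for src in entry_points:
    for dst in crown_jewels:
        for path in nx.all_simple_paths(g, src, dst):
            print(" -> ".join(path))
# internet -> web-app -> app-server -> customer-db
# internet -> vpn-gateway -> app-server -> customer-db
```

Paths that are "most possible and probable" (short, crossing few controls) are the natural first candidates for the architectural review and for scoping adversarial simulations.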


How AI Agents Are Quietly Transforming Frontend Development

Traditional developer tools are passive. You run a linter, and it tells you what’s wrong. You run a build tool, and it compiles. But AI agents are proactive. They don’t wait for instructions; they interpret high-level goals and try to execute them. Want to improve page performance? An agent can analyze your critical rendering path, optimize image sizes, and suggest lazy loading. Want a dark mode implemented across your UI library? It can crawl through your components and offer scoped changes that preserve brand integrity. ... Frontend development has always been plagued by complexity: thousands of packages, constantly changing frameworks, and pixel-perfect demands from designers. AI agents bring sanity to the chaos, leaving cloud security as the main remaining worry; and if you decide to run an agent locally, even that concern largely goes away. They can serve as design-to-code translators, turning Figma files into functional components. They can manage breakpoints, ARIA attributes, and responsive behaviors automatically. They can even test components for edge cases by generating test scenarios that a developer might miss. Because these agents are always “on,” they notice patterns developers sometimes overlook. That dropdown menu that breaks on Safari 14? Flagged. That padding inconsistency between modals? Caught.


Agentic AI won’t make public cloud providers rich

Agentic AI isn’t what most people think it is. When I look at these systems, I see something fundamentally different from the brute-force AI approaches we’re accustomed to. Consider agentic AI more like a competent employee than a powerful calculator. What’s fascinating is how these systems don’t need centralized processing power. Instead, they operate more like distributed networks, often running on standard hardware and coordinating across different environments. They’re clever about using resources, pulling in specialized small language models when needed, and integrating with external services on demand. The real breakthrough isn’t about raw power—it’s about creating more intelligent, autonomous systems that can efficiently accomplish tasks. The big cloud providers emphasize their AI and machine learning capabilities alongside data management and hybrid cloud solutions, whereas agentic AI systems are likely to take a more distributed approach. These systems will integrate with large language models primarily as external services rather than core components. This architectural pattern favors smaller, purpose-built language models and distributed processing over centralized cloud resources. Ask me how I know. I’ve built dozens of these systems for clients recently.
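A minimal sketch of that distributed pattern, with hypothetical helpers standing in for real models: the agent routes each task to a local small language model whenever the task is within local capability and escalates to an external LLM service only when it is not.

```python
def call_local_slm(task: str) -> str:
    # Stand-in for a small model on standard hardware (e.g. a quantized
    # local model); no centralized cloud resources involved.
    return f"[local SLM] handled: {task}"

def call_external_llm(task: str) -> str:
    # Stand-in for a hosted LLM used as an external service,
    # not as a core component of the system.
    return f"[external LLM] handled: {task}"

# Task types this deployment trusts the local model with (illustrative).
LOCAL_CAPABILITIES = {"classify", "extract", "summarize-short"}

def route(task_type: str, task: str) -> str:
    """Prefer cheap local inference; escalate only when necessary."""
    if task_type in LOCAL_CAPABILITIES:
        return call_local_slm(task)
    return call_external_llm(task)

print(route("classify", "triage this support ticket"))
print(route("multi-step-plan", "draft a migration plan"))
```

In this pattern the big provider's model is just one more service behind an interface, which is exactly why the economics tilt away from centralized cloud consumption.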


Cloud a viable choice amid uncertain AI returns

Enterprises can restrict data using internal controls and limit data movement to chosen geographical locations. The cluster can be customized and secured to meet the specific requirements of the enterprise without the constraints of using software or hardware configured and operated by a third party. Given these characteristics, for convenience, Uptime Institute has labeled the method as “best” in terms of customization and control. ... The challenge for enterprises is determining whether the added reassurance of dedicated infrastructure provides a real return on its substantial premium over the “better” option. Many large organizations – from financial services to healthcare – already use the public cloud to hold sensitive data. To secure data, an organization may encrypt data at rest and in transit, configure appropriate access controls, such as security groups, and set up alerts and monitoring. Many cloud providers have data centers approved for government use. It is unreasonable to view the cloud as inherently insecure or non-compliant, considering its broad use across many industries. Although dedicated infrastructure gives reassurance that data is being stored and processed at a particular location, it is not necessarily more secure or compliant than the cloud.
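As one provider-specific illustration of those controls, the sketch below uses AWS via boto3 (assuming configured credentials and an existing bucket; the bucket name is invented). It enables default encryption at rest with KMS and blocks all public access; equivalent knobs exist on other clouds, and alerting would typically be layered on with services such as CloudTrail and CloudWatch.

```python
import boto3

s3 = boto3.client("s3")

# Encrypt data at rest: apply default server-side encryption with a KMS key.
s3.put_bucket_encryption(
    Bucket="example-sensitive-data",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Access controls: block every form of public access to the bucket.
s3.put_public_access_block(
    Bucket="example-sensitive-data",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

None of this requires dedicated hardware, which is the point of the comparison: the controls are configuration, not real estate.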


Why no small business is too small for hackers - and 8 security best practices for SMBs

To be clear, the size of your business isn't particularly relevant to bulk attacks. It's merely that you are one of many businesses that can be targeted through random IP generation, email harvesting, or some other process that makes it very, very cost-effective for a hacker to deliver a piece of malware that opens up computers in your business to opportunistic activity. ... Attackers -- who could be affiliated with organized crime groups, individual hackers, or even teams funded by nation-states -- often use pre-built hacking tools they can deploy without a tremendous amount of research and development. For hackers, this tactic is roughly the equivalent of downloading an app from an app store, although the hacking tools are usually purchased or downloaded from hacker-oriented websites and hidden forums (what some folks call "the dark web"). ... "Many SMB owners assume cybersecurity is too costly or too complex and think they don't have the IT knowledge or resources to set up reliable security. Few realize that they could set up security in a half hour. Moreover, the lack of dedicated cyber staff further complicates the situation for SMBs, making it even more daunting to implement and manage effective security measures."


AI is making the software supply chain more perilous than ever

The software supply chain is a link in modern IT environments that is as crucial as it is vulnerable. A new research report from JFrog, released during KubeCon + CloudNativeCon Europe in London, shows that organizations are struggling with growing threats that are amplified, inevitably, by the rise of AI. ... The report identifies a “quad-fecta” of threats to the integrity and security of the software supply chain: vulnerabilities (CVEs), malicious packages, exposed secrets, and configuration errors/human error. JFrog’s research team detected no fewer than 25,229 exposed secrets and tokens in public repositories – an increase of 64% compared to last year. Worryingly, 27% of these exposed secrets were still active. This interwoven set of security dangers makes it particularly difficult for organizations to keep their defenses consistently in order. ... “More is not always better,” the report states. A sprawling collection of tools can make organizations more vulnerable by adding complexity for developers. At the same time, visibility into the code remains a problem: only 43% of IT professionals say their organization applies security scans at both the code and binary level, down from 56% last year, which indicates that teams still have large blind spots when identifying software risks.
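For a sense of how exposed secrets like these are found, here is a minimal sketch of the usual approach: pattern-matching file contents against known token shapes. The two patterns below (AWS access key IDs and GitHub personal access tokens) are simplified examples rather than a complete ruleset, and a production scanner would cover binaries as well as source, per the report's code-and-binary point.

```python
import re
from pathlib import Path

# Known token shapes; real scanners ship hundreds of rules plus entropy checks.
PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan(root: str) -> None:
    """Walk a directory tree and report strings that look like live credentials."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                # Print only a prefix so the report itself doesn't leak the secret.
                print(f"{path}: possible {label}: {match.group()[:8]}...")

scan(".")
```

Running a check like this in CI is cheap; the report's finding that 27% of exposed secrets were still active suggests that revocation, not just detection, is where many teams fall down.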