
Daily Tech Digest - January 15, 2026


Quote for the day:

"You have to have your heart in the business and the business in your heart." -- An Wang


AI agents can talk — orchestration is what makes them work together

“Agent-to-agent communications is emerging as a really big deal,” G2’s chief innovation officer Tim Sanders told VentureBeat. “Because if you don't orchestrate it, you get misunderstandings, like people speaking foreign languages to each other. Those misunderstandings reduce the quality of actions and raise the specter of hallucinations, which could be security incidents or data leakage.” ... In another critical evolution in the agentic era, human evaluators will become designers, moving from human-in-the-loop to human-on-the-loop, according to Sanders. That is: They will begin designing agents to automate workflows. Agent builder platforms continue to innovate their no-code solutions, Sanders said, meaning nearly anyone can now stand up an agent using natural language. “This will democratize agentic AI, and the super skill will be the ability to express a goal, provide context and envision pitfalls, very similar to a good people manager today.” ... Organizations should begin “expeditious programs” to infuse agents across workflows, especially with highly repetitive work that poses bottlenecks. Likely at first, there will be a strong human-in-the-loop element to ensure quality and promote change management. “Serving as an evaluator will strengthen the understanding of how these systems work,” Sanders said, “and eventually enable all of us to operate upstream in agentic workflows instead of downstream.”
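The orchestration idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: an orchestrator validates every agent-to-agent message against a shared intent vocabulary before routing it, so agents never exchange free-form text they might misinterpret. All names (`Orchestrator`, `Message`, the intents) are invented for the sketch.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Message:
    sender: str
    recipient: str
    intent: str    # must come from a shared vocabulary
    payload: dict

class Orchestrator:
    # The shared vocabulary all agents agree on up front.
    ALLOWED_INTENTS = {"request_data", "return_result", "escalate_to_human"}

    def __init__(self):
        self.agents: Dict[str, Callable[[Message], Optional[Message]]] = {}

    def register(self, name: str, handler: Callable[[Message], Optional[Message]]):
        self.agents[name] = handler

    def route(self, msg: Message) -> Optional[Message]:
        # Reject messages outside the vocabulary instead of letting the
        # receiving agent guess (and possibly hallucinate) the meaning.
        if msg.intent not in self.ALLOWED_INTENTS:
            raise ValueError(f"unknown intent: {msg.intent}")
        if msg.recipient not in self.agents:
            raise ValueError(f"unknown agent: {msg.recipient}")
        return self.agents[msg.recipient](msg)

orch = Orchestrator()
orch.register("billing", lambda m: Message("billing", m.sender, "return_result",
                                           {"invoice_total": 42}))
reply = orch.route(Message("support", "billing", "request_data", {"order": 7}))
```

The point of the central `route` step is exactly Sanders' argument: misunderstandings are caught at the boundary rather than propagating between agents.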


Integrating AI-Enhanced Microservices in SAFe 5.0 Framework

AI-driven microservices can be a game-changer for Lean Portfolio Management within SAFe. By optimizing decision analytics and enhancing value stream performance, AI simplifies, rather than complicates. I know what you’re thinking: AI tools can add complexity. One client put this to the test, and we found AI helped reduce the noise. It sliced through the data smog to identify hidden value streams and automate mundane tasks like financial forecasting and risk management. ... Integrating decentralized AI models into SAFe’s ARTs can significantly enhance their autonomy. During a high-stakes project, we shifted from a centralized to a decentralized model, which allowed ARTs to self-optimize and adapt to shifting priorities seamlessly. It was like giving ARTs a brain of their own. Decentralized AI models reduce the bottlenecks you'd typically encounter in centralized systems. Think of the ARTs as small startups within the larger enterprise ecosystem, each capable of making swift, informed decisions. ... This isn’t just a tech enthusiast's dream—it's an emerging reality. The maturity of AI technologies spells a future where enterprises aren’t just keeping up; they’re setting the pace. So, if there’s a single, actionable insight to glean from my journey, it’s this: enterprises need to actively pursue cross-industry collaborations, invest in AI-powered microservices, and hone their Agile professionals’ skill sets.


Incorporating Geopolitical Risk Into Your IT Strategy

IT organizations know how to plan for unexpected outages, but even the most rigorously designed strategy is vulnerable to the shifting winds of geopolitics. CIOs and technology leaders need to know how their organizations will respond to geopolitical disruptions, and scenario planning needs to be a priority. ... "The IT department can treat geopolitical disruption as an expected operational variable rather than an unforeseen catastrophe. Good and tested enterprise risk management frameworks, investment in government affairs partnerships and ongoing board engagement should start to manage and prepare for this," Dixon said. CIOs need to do scenario modeling around the risks facing their enterprise, and evaluate how IT is teaming with business units, security teams and the CISO on a cohesive tech strategy that builds security, including artificial intelligence security, in from the ground up, said Sean Joyce ... "You're as strong as your weakest link," Joyce said. "As geopolitical risk becomes more prominent, you're going to see tools like cyber being leveraged by countries, particularly those that don't have stronger military or other capabilities. For some, it may be the only tool they can leverage." Physical infrastructure, geography and power supplies are also now areas of risk CIOs need to consider, and infrastructure strategy must align with sustainability, energy realities and geopolitical stability. 


Six Architecture Challenges for Startups

The risk is not that the first version is imperfect; that is inevitable. The risk is that the team keeps layering new functionality on top of an accidental architecture. At some point, the cost of change becomes so high that every small modification feels dangerous. The architectural challenge is to intentionally decide where to accept debt and where to invest in structure. Startups need a minimal set of principles – for example, clear domain boundaries, basic API hygiene, and a simple deployment model – that allow speed without locking the product into a dead end. ... If the product team is still validating pricing models, redefining the customer journey, or experimenting with different verticals, any rigid decomposition can turn into friction. Yet avoiding boundaries altogether leads to a “big ball of mud” that is equally hard to evolve. A practical approach is to use provisional boundaries based on current value streams – onboarding, transaction processing, analytics, etc. – and treat them as hypotheses. The challenge is not to find the perfect structure from day one, but to keep those boundaries explicit and adjustable as the business model evolves. ... Startups must make conscious decisions about where they are comfortable being tightly coupled to a provider and where they need portability. That requires viewing cloud services through a business lens: What is strategic IP, what is replaceable, and what is pure commodity? Aligning these categories with architectural choices is a non-trivial design challenge, not just a procurement decision. 


Platform-as-a-Product: Declarative Infrastructure for Developer Velocity

Without centralized guardrails, teams often compensate by over-allocating resources "to be safe", leading to inconsistent environments and unnecessary cloud spend that is only discovered after deployment. ... What is missing is a developer-friendly abstraction that brings these related concerns together. Developers need a way to express intent (not only what infrastructure is required, but also how the application should be built, deployed, configured across environments, secured, and sized) without having to implement the mechanics of each underlying system. From a platform engineering perspective, this abstraction represents the core of an internal developer platform and can be implemented as a lightweight Python-based platform framework. ... The platform comprises several interconnected components. GitLab pipelines coordinate everything, pulling code from repositories, building and unit testing applications (with tests written by developers), checking security, creating cloud infrastructure with Terraform/IaC, and deploying to Kubernetes clusters with Puppet configuration management. The configuration YAML file controls all of this, telling each component what to do. The architecture clearly separates concerns: the CI pipeline handles code building, testing, and vulnerability scanning. The CD pipeline handles deployment: creating cloud resources, updating Kubernetes, and configuring environments.
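A minimal sketch of the "developer intent" idea, assuming an invented manifest shape (the article does not publish its schema): the developer declares what they need, and a small Python framework expands that intent into concrete settings, applying guardrail presets instead of letting teams over-allocate.

```python
# Guardrail presets owned by the platform team, not individual developers.
# Size names and resource values here are invented for illustration.
SIZE_PRESETS = {
    "small":  {"cpu": "250m", "memory": "256Mi", "replicas": 1},
    "medium": {"cpu": "500m", "memory": "512Mi", "replicas": 2},
}

def render_deployment(manifest: dict) -> dict:
    """Expand a developer's declared intent into concrete deployment
    settings, rejecting anything outside the approved presets."""
    size = manifest.get("size", "small")
    if size not in SIZE_PRESETS:
        raise ValueError(f"unknown size preset: {size}")
    resolved = dict(SIZE_PRESETS[size])
    resolved["image"] = f"{manifest['name']}:{manifest.get('version', 'latest')}"
    resolved["env"] = manifest.get("env", "dev")
    return resolved

# The developer's entire declared intent -- everything else is derived.
spec = {"name": "payments-api", "version": "1.4.2", "size": "medium", "env": "prod"}
deployment = render_deployment(spec)
```

In a real platform the resolved dictionary would feed the Terraform and Kubernetes stages the excerpt describes; the sketch only shows the intent-to-settings expansion that sits at the core of the abstraction.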


(Re)introducing Adaptive Business Continuity

Adaptive BC is designed to provide a framework that delivers better outcomes when organizations deal with losses. The result may be a reduction in documentation (something I greatly favor) but that is not a stated goal. ... My experience over the years has led me to conclude that trying to define priorities for the resumption of services is wasted effort. Many activities can take place in parallel, and priorities will change when disasters occur. A perfect example is the governmental lockdowns and health authority mandates that followed the emergence of COVID. The result is that demand for products and services changed drastically, upending previous priorities. Priorities may be defined following adaptive principles, but it is not at all a stated component of the Adaptive framework. ... For a number of reasons, I would like to see the word “plan” used a lot less within our profession. Seeing the word “strategy” in its place would be a step in the right direction. Strategy improvement is not, however, a key outcome of Adaptive BC efforts. There is some benefit to having clearly defined recovery strategies, but strategies only provide benefit to competent and empowered teams armed with the resources they need to carry out the mission. For this reason, I always emphasize the importance of focusing efforts on capabilities and consider plans and strategies as little more than supporting tools for any business continuity program. The improvement of strategies and/or plans is simply not an expected outcome of Adaptive BC work.


Exactly What To Automate With AI In 2026 For Faster Business Growth

Most founders automate the wrong things. They start with the flashy stuff, the complicated tools and fancy dashboards, while ignoring the repetitive tasks quietly draining their hours. But faster, cleaner growth comes from removing friction in the activities that actually grow your business. ... You shouldn't embark on a day's worth of admin tasks every time a new client says yes. It will only slow you down. Make it easy for them to pay, get a receipt, complete an onboarding form, and submit the required information. On your end, have the Google Drive folders, follow-up emails, and team briefings set up without you lifting a finger. Question everything you currently do manually. There is no reason it couldn't be an AI agent handling the sequence. All the tools you pay for already have integrations with each other; you're just not using them. The goal is that you could sign client after client because onboarding takes minutes, not hours. ... AI-generated content is awful when you use it wrong. But that doesn't mean you shouldn't involve AI in your content production process. Content still matters in marketing, whether long-form articles, videos, or social media visuals. You need to be part of the conversation, but only with relevant, authentic material. You cannot outproduce everyone manually, so use automations and retain your human genius for the finishing touches. ... The more your life admin runs on autopilot, the more you free up time and energy for your business. 
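The onboarding "sequence" described above is just an ordered pipeline of steps. A minimal sketch, with stub functions standing in for the real payment, storage, and email integrations (all names here are invented):

```python
# Each step takes the client record and returns an enriched copy.
# In practice each stub would call a payments, Drive, or email API.
def collect_payment(client):     return {**client, "paid": True}
def send_receipt(client):        return {**client, "receipt_sent": True}
def create_drive_folder(client): return {**client, "folder": f"/clients/{client['name']}"}
def brief_team(client):          return {**client, "team_briefed": True}

# The whole onboarding flow, declared once and run for every new client.
ONBOARDING_STEPS = [collect_payment, send_receipt, create_drive_folder, brief_team]

def onboard(client: dict) -> dict:
    for step in ONBOARDING_STEPS:
        client = step(client)
    return client

result = onboard({"name": "acme"})
```

Adding or reordering a step means editing one list, which is the property that lets onboarding "take minutes, not hours" once the integrations exist.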


What is AI fuzzing? And what tools, threats and challenges generative AI brings

The way traditional fuzzing works is you generate a lot of different inputs to an application in an attempt to crash it. Since every application accepts inputs in different ways, that requires a lot of manual setup. Security testers would then run these tests against their companies’ software and systems to see where they might fail. ... Today, generative artificial intelligence has the potential to automate this previously manual process, coming up with more intelligent tests, and allowing more companies to do more testing of their systems. ... But there’s a third angle involved here. What if, instead of trying to break traditional software, the target was an AI-powered system? This creates unique challenges because AI chatbots are not predictable and can respond differently to the same input at different times. ... AI fuzzing can also help speed up the discovery of vulnerabilities, Roy says. “Traditionally, testing was always a function of how many days and weeks you had to test the system, and how many testers you could throw at the testing,” he says. “With AI, we can expand the scale of the testing.” ... Another use of AI in fuzzing is that it takes more than a set of test cases to fully test an application — you also need a mechanism, a harness, to feed the test cases into the app, and in all the nooks and crannies of the application. “If the fuzzing harness does not have good coverage, then you may not uncover vulnerabilities through your fuzzing,” says Dane Sherrets, staff innovations architect for emerging technologies at HackerOne.
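The core loop the article describes, generating many mutated inputs and recording which ones crash the target, can be sketched in a few lines. The target function here is a toy stand-in for a real parser, with a deliberately planted bug:

```python
import random

def target(data: bytes) -> None:
    # Toy bug: the "parser" crashes on any input containing a NUL byte.
    if b"\x00" in data:
        raise RuntimeError("parser crash")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    # Classic mutation step: flip one byte of the seed input at random.
    out = bytearray(seed)
    out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

def fuzz(seed: bytes, iterations: int = 2000, rng_seed: int = 1) -> list:
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            target(candidate)
        except RuntimeError:
            crashes.append(candidate)  # save crashing input for triage
    return crashes

crashes = fuzz(b"hello world")
```

This is the part that traditionally demanded per-application manual setup (every program accepts input differently); the article's point is that generative AI can produce smarter mutations and build the harness around loops like this one.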


CISOs flag gaps in third-party risk management

CISOs rank third-party cyber risk among their highest-impact threats. Vendor relationships touch nearly every core business function, from cloud infrastructure and software development to data processing and AI services. Each added dependency expands the attack surface and increases the number of organizations involved in protecting sensitive systems and data. ... Only a small portion of organizations report visibility across third-, fourth-, and nth-party relationships. Most operate with partial insight limited to direct vendors or a narrow segment of the extended supply chain. CISOs say limited visibility complicates incident response, risk prioritization, and compliance planning. When a breach emerges several layers removed from a known vendor, security teams may struggle to understand exposure, timelines, and downstream impact. ... CISOs report rising regulatory scrutiny tied to third-party cyber risk. Regulatory frameworks place greater expectations on organizations to demonstrate oversight across vendor ecosystems, including indirect relationships. Only a minority of organizations feel ready to meet upcoming requirements without major changes. Most report progress underway, with further work needed to align processes, tooling, and internal coordination. Third-party risk management involves legal, procurement, compliance, and executive leadership alongside security teams. ... At the same time, AI adoption accelerates within vendor risk management itself. 


Anti-fragility – what is it and why should it be the goal for your organisation?

That ability to thrive in the face of disruption must become the basis for improved resilience. Modern organisations shouldn’t strive for survival, but for continual improvement. In the cyber sphere, that is crucial. Threat actors are constantly changing tack, targeting new CVEs, and executing increasingly complicated supply chain attacks. Resilience must therefore move in tandem as an ongoing process of learning and adapting. That is the crux of anti-fragility. It defines systems that thrive and improve from stress, volatility, disorder and shocks, rather than just resisting them. If a security model is only designed to recover, it remains just as vulnerable as before. But an anti-fragile approach actively benefits from each attack, identifying weaknesses, addressing them, and adapting as needed. ... Increasingly, organisations are recognising the value in anti-fragility as a strategy and more will adopt it next year. However, getting there means going beyond regulatory compliance. Compliance lays the foundations from which successful cybersecurity can be built, yet many currently see it as the finished structure. There are several problems with that. Security legislation frequently lags behind the threat landscape, and so the gap between a new threat emerging and a new law coming in to address it can stretch over the course of years. Organisations must therefore understand that compliance doesn’t equal protection. 

Daily Tech Digest - May 18, 2024

AI imperatives for modern talent acquisition

In talent acquisition, the journey ahead promises to be tougher than ever. Recruiters face a paradigm shift, moving beyond traditional notions of filling vacancies to addressing broader business challenges. The days of simply sourcing candidates are long gone; today's TA professionals must navigate complexities ranging from upskilling and reskilling to mobility and contracting. ... At the heart of it lies a structural shift reshaping the global workforce. Demographic trends, such as declining birth rates, paint a sobering picture of a world where there simply aren't enough people to fill available roles. This demographic drought isn't limited to a single region; it's a global phenomenon with far-reaching implications. Compounding this challenge is the changing nature of careers. No longer tethered to a single company, employees are increasingly empowered to seek out opportunities that align with their aspirations and values. This has profound implications for talent retention and development, necessitating a shift towards systemic HR strategies that prioritise upskilling, mobility, and employee experience.


Ineffective scaled agile: How to ensure agile delivers in complex systems

When developing a complex system it’s impossible to uncover every challenge even with the most in-depth upfront analysis. One way of dealing with this is by implementing governance that emphasizes incorporating customer feedback, active leadership engagement and responding to changes and learnings. Another challenge can arise when teams begin to embrace working autonomously. They start implementing local optimizations which can lead to inefficiencies. The key is that the governance approach should make sure that the overall work is broken down into value increments per domain and then broken down further into value increments per team in regular time intervals. This creates a shared sense of purpose across teams and guides them towards the same goal. Progress can then be tracked using the working system as the primary measure of progress. Those responsible for steering the overall program need to facilitate feedback and prioritization discussions, and should encourage the leadership to adapt to internal insights or changes in the external environment.


How to navigate your way to stronger cyber resilience

If an organization doesn’t have a plan for what to do if a security incident takes place, they risk finding themselves in the precarious position of not knowing how to react to events, and consequently doing nothing or the wrong thing. The report also shows that just over a third of the smaller companies worry that senior management doesn’t see cyberattacks as a significant risk. How can they get greater buy-in from their management team on the importance of cyber risks? It’s important to understand that this is not a question of management failure. It is hard for business leaders to engage with or care about something they don’t fully understand. The onus is on security professionals to speak in a language that business leaders understand. They need to be storytellers and be able to explain how to protect brand reputation through proactive, multi-faceted defense programs. Every business leader understands the concept of risk. If in doubt, present cybersecurity threats, challenges, and opportunities in terms of how they relate to business risk.


DDoS attacks: Definition, examples, and techniques

DDoS botnets are the core of any DDoS attack. A botnet consists of hundreds or thousands of machines, called zombies or bots, that a malicious hacker has gained control over. The attackers will harvest these systems by identifying vulnerable systems that they can infect with malware through phishing attacks, malvertising attacks, and other mass infection techniques. The infected machines can range from ordinary home or office PCs to IoT devices—the Mirai botnet famously marshalled an army of hacked CCTV cameras—and their owners almost certainly don’t know they’ve been compromised, as they continue to function normally in most respects. The infected machines await a remote command from a so-called command-and-control server, which serves as a command center for the attack and is often itself a hacked machine. Once unleashed, the bots all attempt to access some resource or service that the victim makes available online. Individually, the requests and network traffic directed by each bot towards the victim would be harmless and normal. 
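The last sentence is the defender's core problem: each bot's traffic looks normal in isolation, so detection has to aggregate. An illustrative sketch (not from the article) of sliding-window counting per source and in aggregate, with invented thresholds:

```python
from collections import defaultdict, deque

WINDOW = 10.0           # seconds of traffic history to keep
PER_SOURCE_LIMIT = 20   # requests/window one client may reasonably send
AGGREGATE_LIMIT = 1000  # requests/window the service can absorb

class TrafficMonitor:
    def __init__(self):
        self.by_source = defaultdict(deque)  # timestamps per client
        self.all_requests = deque()          # timestamps across all clients

    def record(self, source: str, now: float) -> dict:
        for q in (self.by_source[source], self.all_requests):
            q.append(now)
            while q and now - q[0] > WINDOW:  # drop entries outside window
                q.popleft()
        return {
            "source_suspicious": len(self.by_source[source]) > PER_SOURCE_LIMIT,
            "aggregate_overload": len(self.all_requests) > AGGREGATE_LIMIT,
        }

mon = TrafficMonitor()
# 1,500 bots, each sending one request per second: individually harmless.
status = {}
for t in range(3):
    for bot in range(1500):
        status = mon.record(f"bot-{bot}", float(t))
```

After three seconds, no single bot has exceeded its per-source limit, yet the aggregate is far past what the victim can serve, which is exactly how a distributed attack hides inside "normal" traffic.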


7 ways to use AI in IT disaster recovery

The integration of AI into IT disaster recovery is not just a trendy addition; it's a significant enhancement that can lead to quicker response times, reduced downtime and overall improved business continuity. By proactively identifying risks, optimizing resources and continuously learning from past incidents, AI offers a forward-thinking approach to disaster recovery that could be the difference between a minor IT hiccup and a significant business disruption. ... A significant portion of IT disasters are due to cyberthreats. AI and machine learning can help mitigate these issues by continuously monitoring network traffic, identifying potential threats and taking immediate action to mitigate risks. Most new cybersecurity businesses are using AI to learn about emerging threats. They also use AI to look at system anomalies and block questionable activity. ... AI can optimize the use of available resources, ensuring that critical functions receive the necessary resources first. This optimization can greatly increase the efficiency of the recovery process and help organizations working with limited resources.
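The traffic-monitoring idea above reduces, at its simplest, to comparing current activity against a learned baseline. A minimal statistical sketch (real systems use far richer models than a z-score; the numbers are invented):

```python
import statistics

def is_anomalous(baseline, current: float, threshold: float = 3.0) -> bool:
    """Flag `current` if it deviates from the baseline by more than
    `threshold` standard deviations."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    z = (current - mean) / stdev   # how many std-devs from normal
    return abs(z) > threshold

# Baseline: recent traffic samples (e.g. requests/sec) under normal load.
normal_traffic = [100.0, 98.0, 103.0, 101.0, 99.0, 102.0, 97.0, 100.0]
spike = 180.0
```

A reading of 180 against this baseline is flagged, while 101 is not; the machine-learning systems the article refers to apply the same compare-against-learned-normal principle across many signals at once.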


Underwater datacenters could sink to sound wave sabotage

In a paper available on the arXiv open-access repository, the researchers detail how sound at a resonant frequency of the hard disk drives (HDDs) deployed in submerged enclosures can cause throughput reduction and even application crashing. HDDs are still widely used in datacenters, despite their obituary having been written many times, and are typically paired with flash-based SSDs. The researchers focused on hybrid and full-HDD architectures to evaluate the impact of acoustic attacks. The researchers found that sound at the right resonance frequency would induce vibrations in the read-write head and platter of the disks by vibration propagation, proportional to the acoustic pressure, or intensity of the sound. This affects the disk's read/write performance. For the tests, a Supermicro rack server configured with a RAID 5 storage array was placed inside a metal enclosure in two scenarios: an indoor laboratory water tank and an open-water testing facility, which was actually a lake on the University of Florida campus. Sound was generated from an underwater speaker.


Agile Design, Lasting Impact: Building Data Centers for the AI Era

While there is a clear need for more data centers, the development timeline of building new, modern data centers incorporating these technologies and regulatory adaptations is currently between three to five years (more in some cases). And not just that, the fast pace at which technology is evolving means manufacturers are likely to face the need to rethink strategy and innovation mid-build to accommodate further advancements. ... This is a pivotal moment for our industry and what’s built today could influence what’s possible tomorrow. We’ve had successful adaptations before, but due to the current pace of evolution, future builds need to be able to accommodate retrofits to ensure they remain fit for purpose. It's crucial to strike a balance between meeting demand, adhering to regulations, and designing for adaptability and durability to stay ahead. We might see a rise in smaller, colocation data centers offering flexibility, reduced latency, and cost savings. At the same time, medium players could evolve into hyperscalers, with the right vision to build something suitable to exist in the next hype cycle.


Quantum internet inches closer: Qubits sent 22 miles via fiber optic cable

Even as the biggest names in the tech industry race to build fault-tolerant quantum computers, the transition from binary to quantum can only be completed with a reliable internet connection to transmit the data. Unlike binary bits transported as light signals inside a fiber optic cable that can be read, amplified, and transmitted over long distances, quantum bits (qubits) are fragile, and even attempting to read them changes their state. ... Researchers in the Netherlands, China, and the US separately demonstrated how qubits could be stored in “quantum memory” and transmitted over the fiber optic network. Ronald Hanson and his team at the Delft University of Technology in the Netherlands encoded qubits in the electrons of nitrogen atoms and nuclear states of carbon atoms of the small diamond crystals that housed them. An optical fiber cable ran 25 miles from the university to another laboratory in The Hague, establishing a link with similarly embedded nitrogen atoms in diamond crystals.


Cyber resilience: Safeguarding your enterprise in a rapidly changing world

In an era defined by pervasive digital connectivity and ever-evolving threats, cyber resilience has become a crucial pillar of survival and success for modern-day enterprises. It represents an organisation’s capacity to not just withstand and recover from cyberattacks but also to adapt, learn, and thrive in the face of relentless and unpredictable digital challenges. ... Due to the crippling effects a cyberattack can have on a nation, governments and regulatory bodies are also working to develop guidelines and standards which encourage organisations to embrace cyber resilience. For instance, the European Parliament recently passed the European Cyber Resilience Act (CRA), a legal framework to describe the cybersecurity requirements for hardware and software products placed on the European market. It aims to ensure manufacturers take security seriously throughout a product’s lifecycle. In other regions, such as India, where cybersecurity adoption is comparatively evolving, the onus falls on industry leaders to work with governmental bodies and other enterprises to encourage the development and adoption of similar obligations. 


How to Build Large Scale Cyber-Physical Systems

There are several challenges in building hardware-reliant cyber-physical systems, such as hardware lead times, organisational structure, common language, system decomposition, cross-team communication, alignment, and culture. People engaged in the development of large-scale safety-critical systems need line of sight to business objectives, Yeman said. Each team should be able to connect their daily work to those objectives. Yeman suggested communicating the objectives through the intent and goals of the system as opposed to specific tasks. An example of an intent-based system objective would be to ensure the system can communicate with military platforms securely, as opposed to specifically defining that the system must communicate via Link-16, she added. Yeman advised breaking the system problem down into smaller solvable problems. For each of those problems, resolve what is known first, then resolve the unknowns through a series of experiments, she said. This approach allows you to iteratively and incrementally build a continuously validated solution.



Quote for the day:

"Remember, teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni