Quote for the day:
"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart
Software Supply Chain Risks: Lessons from Recent Attacks
Modern applications are complex tapestries woven from proprietary code,
open-source libraries, third-party APIs, and countless development tools. This
interconnected web is the software supply chain, and it has become one of the
most critical—and vulnerable—attack surfaces for organizations globally.
Supply chain attacks are particularly insidious because they exploit trust.
Organizations implicitly trust the code they import from reputable sources and
the tools their developers use daily. Attackers have recognized that it's
often easier to compromise a less-secure vendor or a widely-used open-source
project than to attack a well-defended enterprise directly. Once an attacker
infiltrates a supply chain, they gain a "force multiplier" effect. A single
malicious update can be automatically pulled and deployed by thousands of
downstream users, granting the attacker widespread access instantly. Recent
high-profile attacks have shattered the illusion of a secure perimeter,
demonstrating that a single compromised component can have catastrophic,
cascading effects. ... The era of blindly trusting software components is
over. The software supply chain has become a primary battleground for
cyberattacks, and the consequences of negligence are severe. By learning from
recent attacks and proactively implementing robust security measures like
SBOMs, secure pipelines, and rigorous vendor vetting, organizations can
significantly reduce their risk and build more resilient, trustworthy
software.
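To make the SBOM recommendation a little more concrete, here is a minimal sketch, assuming a CycloneDX-style JSON SBOM and a hypothetical blocklist file maintained by a security team, of how a CI step might flag known-compromised component versions before deployment:

```python
import json
import sys

def load_components(sbom_path):
    """Read component name/version pairs from a CycloneDX-style JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [(c.get("name"), c.get("version")) for c in sbom.get("components", [])]

def check_against_blocklist(components, blocklist_path):
    """Return components whose exact name/version appears on an internal blocklist.

    The blocklist file is a hypothetical JSON list of {"name": ..., "version": ...}
    entries; it is not part of any standard.
    """
    with open(blocklist_path) as f:
        blocked = {(b["name"], b["version"]) for b in json.load(f)}
    return [c for c in components if c in blocked]

if __name__ == "__main__":
    sbom_file, blocklist_file = sys.argv[1], sys.argv[2]
    hits = check_against_blocklist(load_components(sbom_file), blocklist_file)
    for name, version in hits:
        print(f"BLOCKED component in supply chain: {name}=={version}")
    sys.exit(1 if hits else 0)  # fail the pipeline if anything matched
```

A real pipeline would pull the blocklist from a vulnerability feed rather than a local file, but the shape of the check is the same.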
Building Bridges, Not Barriers: The Case for Collaborative Data Governance
The collaborative data governance model preserves existing structure while improving coordination among teams through shared standards and processes. That coordination is now even more critical for organizations that want to take advantage of AI systems. The collaborative model is an alternative with many benefits for organizations whose central governance bodies – like finance, IT, data and risk – operate in silos. Complex digital and data initiatives, as well as regulatory and ethical concerns, often span multiple domains, making close coordination across departments a necessity. While the collaborative data governance model can be highly effective for complex organizations, there are situations where it may not be appropriate. ... Rather than taking a centralized approach to managing data among multiple governance domains, a federated approach allows each domain to retain its authority while adhering to shared governance standards. In other words, local control with organization-wide cohesion. ... The collaborative governance model is a framework that promotes systems and processes that are accessible to the organization, rather than a series of burdensome checks and red tape. In other words, under this model, data governance is viewed as an enabler, not a blocker. ... Shared platforms built on effective tools such as data catalogs, policy management and collaboration spaces streamline governance processes and enable seamless communication and cooperation between teams.
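One illustrative and entirely hypothetical way to picture "local control with organization-wide cohesion" in code: domains own and register their datasets locally, but every registration is checked against a small set of shared, org-wide standards.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical, minimal model of federated governance: shared org-wide
# standards, with each domain retaining authority over its own catalog.

@dataclass
class Dataset:
    name: str
    owner_domain: str
    has_lineage: bool = False
    has_steward: bool = False
    classification: Optional[str] = None  # e.g. "public", "internal", "restricted"

SHARED_STANDARDS = {
    "lineage recorded": lambda d: d.has_lineage,
    "steward assigned": lambda d: d.has_steward,
    "classification set": lambda d: d.classification is not None,
}

@dataclass
class Domain:
    """A governance domain (finance, risk, IT, ...) that manages its own catalog."""
    name: str
    catalog: List[Dataset] = field(default_factory=list)

    def register(self, dataset: Dataset):
        """Local decision to publish, but checked against the shared standards."""
        gaps = [rule for rule, check in SHARED_STANDARDS.items() if not check(dataset)]
        if not gaps:
            self.catalog.append(dataset)
        return gaps

finance = Domain("finance")
gaps = finance.register(Dataset("monthly_close", "finance", has_lineage=True))
print(gaps)  # -> ['steward assigned', 'classification set']
```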
China Researches Ways to Disrupt Satellite Internet
In an academic paper published in Chinese last month, researchers at two major
Chinese universities found that the communications provided by satellite
constellations could be jammed, but at great cost: To disrupt signals from the
Starlink network to a region the size of Taiwan would require 1,000 to 2,000
drones, according to a research paper cited in a report in the South China
Morning Post. ... Cyber- and electronic-warfare attacks against satellites are
being embraced because they pose less risk of collateral damage and are less
likely to escalate tensions, says Clayton Swope, deputy director for the
Aerospace Security Project at the Center for Strategic and International
Studies (CSIS), a Washington, DC-based policy think tank. ... The
constellations are resilient to disruptions. The latest research into jamming
constellation-satellite networks was published in the Chinese peer-reviewed
journal Systems Engineering and Electronics on Nov. 5 with a title that
translates to "Simulation research of distributed jammers against
mega-constellation downlink communication transmissions," the SCMP reported.
... China is not just researching ways to disrupt communications for rival
nations, but also is developing its own constellation technology to benefit
from the same distributed space networks that make Starlink, EutelSat, and
others so reliable, according to the CSIS's Swope.
The Legacy Challenge in Enterprise Data
As companies face extreme complexity, with multiple legacy data warehouses and
disparate analytical data assets owned by line-of-business analysts,
decision-making becomes challenging when moving to cloud-based data systems for
transformation and migration. Because both options are challenging, there is no
one-size-fits-all solution, and careful
consideration is needed when making the decision, as this involves millions of
dollars and years of critical work. ... Enterprise migrations are long
journeys, not short projects. Programs typically span 18 to 24 months, cover
hundreds of terabytes of data, and touch dozens of business domains. A single
cutover is too risky, while endless pilots waste resources. Phased execution
is the only sustainable approach. High-value domains are prioritized to
demonstrate progress. Legacy and cloud often run in parallel until validation
is complete. Automated validation, DevOps pipelines, and AI-assisted SQL
conversion accelerate progress. To avoid burnout, teams are structured with a
mix of full-time employees who work closely with business users and managed
services that provide technical scale. ... Governance must be embedded from
the start. Metadata catalogs track lineage and ownership. Automated validation
ensures quality at every stage, not just at cutover. Role-based access
controls, encryption, and masking enforce compliance.
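As a concrete flavour of "automated validation at every stage", here is a minimal sketch; the rows, columns, and choice of MD5 checksums are hypothetical stand-ins for data pulled from the legacy and cloud warehouses during a phased cutover.

```python
import hashlib

# Minimal sketch of post-migration validation: compare row counts and a
# per-column checksum between a legacy extract and its migrated cloud copy.
# In practice these rows would come from the two warehouses; here they are
# plain Python lists so the example stays self-contained.

def column_checksums(rows, n_cols):
    """Order-insensitive MD5 checksum per column (an illustrative choice of hash)."""
    sums = []
    for col in range(n_cols):
        digest = hashlib.md5()
        for value in sorted(str(row[col]) for row in rows):
            digest.update(value.encode("utf-8"))
        sums.append(digest.hexdigest())
    return sums

def validate(legacy_rows, cloud_rows, n_cols):
    """Return a list of human-readable discrepancies; empty means the slice passed."""
    issues = []
    if len(legacy_rows) != len(cloud_rows):
        issues.append(f"row count mismatch: {len(legacy_rows)} vs {len(cloud_rows)}")
    for i, (a, b) in enumerate(zip(column_checksums(legacy_rows, n_cols),
                                   column_checksums(cloud_rows, n_cols))):
        if a != b:
            issues.append(f"checksum mismatch in column {i}")
    return issues

legacy = [(1, "EMEA", 120.0), (2, "APAC", 87.5)]
cloud = [(1, "EMEA", 120.0), (2, "APAC", 88.0)]  # a value drifted during conversion
print(validate(legacy, cloud, n_cols=3))  # -> ['checksum mismatch in column 2']
```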
Through the Looking Glass: Data Stewards in the Realm of Gondor
Data Stewards are sought-after individuals today. I have seen many “data
steward” job postings over the last six months and read much discussion about
the role in various periodicals and postings. I have always agreed with my
editor’s conviction that everyone is a data steward, accountable for the data
they create, manage, and use. Nevertheless, the role of data steward, as a job
and as a career, has established itself in the view of many companies as
essential to improving data governance and management. ... “Information
Stewardship” is a concept like Data Stewardship and may even predate it, based
on my brief survey of articles on these topics. Trevor gives an excellent
summary of the essence of stewardship in this context: Stewardship requires
the acceptance by the user that the information belongs to the organization as
a whole, not any one individual. The information should be shared as needed
and monitored for changes in value. ... Data Stewards “own” data, or to be
more precise, Data Stewards are responsible for the data owned by the
enterprise. If the enterprise is the old-world Lord’s Estate, then the Data
Stewardship Team consists of the people who watch over the lifeblood of the
estate, including the shepherds who make sure the data is flowing smoothly
from field to field, safe from internal and external predators, safe from
inclement weather, and safe from disease. ...
Scaling Cloud and Distributed Applications: Lessons and Strategies
Scaling extends beyond simply adding servers. When scaling is triggered, the
fundamental question is whether the application genuinely needs to scale
because of customer demand, or whether upstream services with queuing issues
are slowing system response. When threads wait on responses and cannot execute,
pressure builds on CPU and memory resources, triggering elastic scaling even
though actual demand has not grown. ... Architecture must extend
beyond documentation. Creating opinionated architecture templates assists
teams in building applications that automatically inherit architectural
standards. Applications deploy automatically using manifest-based definitions,
so that teams can focus on business functionality rather than infrastructure
tooling complexities. ... Infrastructure repaving represents a highly
effective practice of systematically rebuilding infrastructure each sprint.
Automated processes clean up running instances regularly. This approach
enhances security by eliminating configuration drift. When drift occurs or
patches need to be applied, including zero-day vulnerability fixes, all updates
can be incorporated systematically. Extended operation periods create stale
resources, performance degradation, and security vulnerabilities, so
environments are recreated automatically at defined intervals (weekly or
bi-weekly).
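A minimal sketch of the scaling question raised above, with all thresholds and metric names invented for illustration: before honoring an autoscaling trigger, check whether resource pressure tracks genuine request growth or merely time spent waiting on upstream calls.

```python
from dataclasses import dataclass

# Hypothetical metrics snapshot; in practice these would come from your
# observability stack (request rate, CPU, and time spent blocked on upstreams).

@dataclass
class Metrics:
    requests_per_sec: float
    baseline_requests_per_sec: float
    cpu_utilization: float          # 0.0 - 1.0
    upstream_wait_fraction: float   # fraction of request time spent waiting on upstreams

def should_scale_out(m: Metrics,
                     cpu_threshold: float = 0.75,
                     demand_growth_threshold: float = 1.3,
                     wait_threshold: float = 0.5) -> str:
    """Decide whether high resource pressure reflects real demand or upstream queuing."""
    if m.cpu_utilization < cpu_threshold:
        return "no-scale: resource pressure is below threshold"
    demand_growth = m.requests_per_sec / max(m.baseline_requests_per_sec, 1e-9)
    if demand_growth >= demand_growth_threshold:
        return "scale-out: genuine increase in customer demand"
    if m.upstream_wait_fraction >= wait_threshold:
        return "investigate upstream: threads are blocked on slow dependencies, not new demand"
    return "scale-out: pressure is local and not explained by upstream waits"

print(should_scale_out(Metrics(requests_per_sec=105, baseline_requests_per_sec=100,
                               cpu_utilization=0.9, upstream_wait_fraction=0.7)))
# -> investigate upstream: threads are blocked on slow dependencies, not new demand
```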
Why Synthetic Data Will Decide Who Wins the Next Wave of AI
Why is synthetic data suddenly so important? The simple answer is that AI has begun bumping into a glass ceiling. Real-world data doesn't extend far enough to cover all the unlikely edge cases or every scenario that we want our models to live through. Synthetic data allows teams to code in the missing parts directly. Developers construct situations as needed. ... Synthetic data holds the key to filling the gap when the quality or volume of real data falls short of what AI models need, but creating it is not easy. Behind the scenes, there's an entire stack working together: simulation engines, generative models like GANs and diffusion systems, and large language models (LLMs) for text-based domains. All of this creates virtual worlds for training. ... The organizations most affected by the growing need for synthetic data are those that operate in high-risk areas where actual data does not exist or is inefficient to collect. Think of fully autonomous vehicles that can't simply wait for every dangerous encounter to occur in traffic, doctors working on a cure for a rare disease who can't call on thousands of cases, or trading firms that can't wait for just the right market shock to stress their AI models. These teams can turn to synthetic data to learn from situations that are simply not possible (or practical) to capture in real life.
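As a toy illustration of "coding in the missing parts directly" (the scenario fields and probabilities below are invented for the example), this sketch oversamples a rare edge case that a real-world dataset would almost never contain:

```python
import random

# Toy sketch: generate synthetic driving scenarios that deliberately oversample
# a rare edge case (a pedestrian stepping out in heavy rain at night), which a
# fleet might log only a handful of times across millions of real miles.

random.seed(42)

def synth_scenario(force_edge_case):
    """Return one synthetic scenario record; field names are illustrative only."""
    if force_edge_case:
        return {
            "time_of_day": "night",
            "weather": "heavy_rain",
            "pedestrian_distance_m": round(random.uniform(2.0, 8.0), 1),
            "vehicle_speed_kmh": round(random.uniform(30, 60), 1),
            "label": "emergency_brake",
        }
    return {
        "time_of_day": random.choice(["day", "dusk", "night"]),
        "weather": random.choice(["clear", "light_rain", "heavy_rain"]),
        "pedestrian_distance_m": round(random.uniform(10.0, 80.0), 1),
        "vehicle_speed_kmh": round(random.uniform(20, 110), 1),
        "label": "normal_driving",
    }

# Real logs might contain the edge case <0.1% of the time; here we dial it to 30%.
dataset = [synth_scenario(random.random() < 0.30) for _ in range(1000)]
print(sum(1 for s in dataset if s["label"] == "emergency_brake"), "edge-case examples of 1000")
```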
How ABB's Approach to IT/OT Ensures Cyber Resilience
The convergence of IT and OT creates new vulnerabilities as previously isolated control systems now require integration with enterprise networks. ABB addresses this by embedding security architecture from the start rather than retrofitting it later. This includes proper network segmentation, validated patching protocols and granular access controls that enable safe data connectivity while protecting operational technology. ... On the security front, AI-driven monitoring can identify anomalous patterns in network traffic and system behavior that might indicate a breach attempt, spotting threats that traditional rule-based systems would miss. However, it's crucial to distinguish between embedded AI and Gen AI. Embedded AI in our products optimises processes with predictable, explainable outcomes. This same principle applies to security: AI systems that monitor for threats must be transparent in how they reach conclusions, allowing security teams to understand and validate alerts rather than trusting a black box. ... Secure data exchange protocols, multi-factor authentication on remote access points and validated update mechanisms all work together to enable the connectivity that digital twins require while maintaining security boundaries. The key is recognising that digital transformation and security are interdependent. Organisations investing millions in AI, digital twins or automation while neglecting cybersecurity are building on sand.
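To illustrate the "transparent rather than black box" point, here is a generic sketch (not ABB's actual tooling; the baseline metrics and threshold are invented) of a z-score anomaly check that reports which traffic features drove the alert:

```python
from statistics import mean, stdev

# Generic, explainable anomaly check on OT network traffic features.
# Baselines and the current sample are invented numbers for illustration;
# in practice they would come from historical telemetry.

BASELINE = {
    "packets_per_sec": [800, 820, 790, 810, 805, 815, 795],
    "unique_destinations": [12, 11, 13, 12, 12, 11, 13],
    "failed_logins_per_min": [0, 1, 0, 0, 1, 0, 0],
}

def explainable_anomaly_reasons(sample, z_threshold=3.0):
    """Return human-readable reasons for any feature whose z-score exceeds the threshold."""
    reasons = []
    for feature, history in BASELINE.items():
        mu, sigma = mean(history), stdev(history)
        z = (sample[feature] - mu) / sigma if sigma else 0.0
        if abs(z) >= z_threshold:
            reasons.append(f"{feature}={sample[feature]} deviates {z:.1f} sigma from baseline mean {mu:.1f}")
    return reasons

alert = explainable_anomaly_reasons({
    "packets_per_sec": 2400,        # sudden traffic spike
    "unique_destinations": 45,      # fan-out to many new hosts
    "failed_logins_per_min": 9,
})
for reason in alert:
    print("ALERT:", reason)
```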
"The true power of remote MCP is realized through centralized 'agent gateways'
where these servers are registered and managed. This model delivers the
essential guardrails that enterprises require," Shrivastava said. That said,
agent gateways do come with their own caveats. "While gateways provide security,
managing a growing ecosystem of dozens or even hundreds of registered MCP tools
introduces a new challenge: orchestration," he said. "The most scalable approach
is to add another layer of abstraction: organizing toolchains into 'topics'
based on the 'job to be done.'" ... "When a large language model is granted
access to multiple external tools via the protocol, there is a significant risk
that it may choose the wrong tool, misuse the correct one, or become confused
and produce nonsensical or irrelevant outputs, whether through classic
hallucinations or incorrect tool use," he explained. ... MCP's scaling limits
also present a huge obstacle. The scaling limits exist "because the protocol was
never designed to coordinate large, distributed networks of agents," said James
Urquhart, field CTO and technology evangelist at Kamiwaza AI, a provider of
products that orchestrate and deploy autonomous AI agents. MCP works well in
small, controlled environments, but "it assumes instant responses between
agents," he said -- an unrealistic expectation once systems grow and "multiple
agents compete for processing time, memory or bandwidth."
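The "organize toolchains into topics based on the job to be done" idea can be pictured with a small routing sketch. This is generic Python, not the MCP SDK; the topics, tools, and keyword matcher are hypothetical stand-ins for whatever registry and classifier a gateway actually uses. The point is that the model only ever chooses among a topic's handful of tools rather than hundreds.

```python
# Generic sketch of topic-based tool routing in front of an agent gateway.

TOOL_TOPICS = {
    "customer-support": ["lookup_ticket", "refund_order", "summarize_conversation"],
    "data-analytics": ["run_sql_query", "profile_dataset", "plot_metric"],
    "devops": ["restart_service", "tail_logs", "open_incident"],
}

TOPIC_KEYWORDS = {
    "customer-support": ["ticket", "refund", "customer", "complaint"],
    "data-analytics": ["query", "dashboard", "metric", "dataset"],
    "devops": ["deploy", "incident", "logs", "restart"],
}

def candidate_tools(request):
    """Narrow the registered tools to the topic(s) that match the request."""
    text = request.lower()
    tools = []
    for topic, keywords in TOPIC_KEYWORDS.items():
        if any(word in text for word in keywords):
            tools.extend(TOOL_TOPICS[topic])
    # Fall back to the full registry only if no topic matched at all.
    return tools or [t for ts in TOOL_TOPICS.values() for t in ts]

print(candidate_tools("Customer asked for a refund on ticket 4821"))
# -> ['lookup_ticket', 'refund_order', 'summarize_conversation']
```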
The quantum clock is ticking and businesses are still stuck in prep mode
The report highlights one of the toughest challenges. Eighty-one percent of
respondents said their crypto libraries and hardware security modules are not
prepared for post-quantum integration. Many use legacy systems that depend on
protocols designed long before quantum threats were taken seriously.
Retrofitting these systems is not a simple upgrade. It requires changes to how
keys are generated, stored and exchanged. Skills shortages compound the problem.
Many security teams lack experience in testing or deploying post-quantum
algorithms. Vendor dependence also slows progress because businesses often
cannot move forward until external suppliers update their own tooling. ...
Nearly every organization surveyed plans to allocate budget toward post-quantum
projects within the next two years. Most expect to spend between six and ten
percent of their cybersecurity budgets on research, tooling or deployment.
Spending levels differ by region. More than half of US organizations plan to
invest at least eleven percent, far higher than the UK and Germany. ...
Contractual requirements from customers and partners are seen as the strongest
motivator for adoption. Industry standards rank near the top of the list across
most sectors. Many respondents also pointed to upcoming regulations and mandates
as drivers. Security incidents ranked surprisingly low in the US, suggesting
that market and policy signals hold more influence than hypothetical attack
scenarios.
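One small, concrete first step the findings imply is a cryptographic inventory. The sketch below (file paths, extensions, and regex patterns are illustrative heuristics, not an authoritative list) scans a source tree for algorithms a post-quantum migration plan would typically need to revisit, such as RSA, ECDSA, and classic Diffie-Hellman:

```python
import re
from pathlib import Path

# Illustrative crypto-inventory scan: flag source files that reference
# algorithms a post-quantum migration plan would typically need to revisit.

QUANTUM_VULNERABLE_PATTERNS = {
    "RSA key generation": re.compile(r"rsa\.generate|generate_rsa", re.IGNORECASE),
    "ECDSA / EC keys": re.compile(r"\becdsa\b|secp256|prime256v1", re.IGNORECASE),
    "Diffie-Hellman": re.compile(r"\bdiffie[-_ ]?hellman\b", re.IGNORECASE),
}

def scan_tree(root):
    """Map each finding category to the files (under `root`) that mention it."""
    findings = {name: [] for name in QUANTUM_VULNERABLE_PATTERNS}
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".java", ".go", ".c", ".ts"}:
            continue
        text = path.read_text(errors="ignore")
        for name, pattern in QUANTUM_VULNERABLE_PATTERNS.items():
            if pattern.search(text):
                findings[name].append(str(path))
    return findings

if __name__ == "__main__":
    for category, files in scan_tree("./src").items():  # "./src" is a placeholder path
        print(f"{category}: {len(files)} file(s)", *files, sep="\n  ")
```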






























