Quote for the day:
"Limitations live only in our minds. But
if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti

When it comes to quantum readiness, businesses currently have two options:
Quantum key distribution (QKD) and post-quantum cryptography (PQC). Of these,
PQC reigns supreme. Here’s why. On the one hand, you have QKD, which leverages
principles of quantum physics, such as superposition, to securely distribute
encryption keys. Although great in theory, it needs extensive new
infrastructure, including bespoke networks and highly specialised hardware. More
importantly, it also lacks authentication capabilities, severely limiting its
practical utility. PQC, on the other hand, comprises classical cryptographic
algorithms specifically designed to withstand quantum attacks. It can be
integrated into existing digital infrastructures with minimal
disruption. ... Imagine installing new quantum-safe algorithms prematurely,
only to discover later they’re vulnerable, incompatible with emerging standards,
or impractical at scale. This could have the opposite effect, inadvertently
increasing the attack surface and creating severe operational headaches, and
ironically leaving the organization less secure. But delaying migration for too
long also poses serious risks. Malicious actors could already be harvesting
encrypted data, planning to decrypt it when quantum technology matures – so
businesses protecting sensitive data such as financial records, personal
details, and intellectual property cannot afford indefinite delays.
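
The excerpt doesn't name a library or algorithm; as a hedged illustration of why PQC slots into existing infrastructure so easily, here is a minimal key-encapsulation sketch assuming the open-source liboqs-python bindings (the `oqs` package) and the ML-KEM-768 algorithm name they expose. It is a sketch under those assumptions, not a recommended implementation.

```python
# Minimal post-quantum key-encapsulation sketch using liboqs-python (`oqs`).
# Assumes liboqs is built with ML-KEM-768 enabled; algorithm names vary by version.
import oqs

ALG = "ML-KEM-768"  # lattice-based KEM standardized by NIST (FIPS 203)

# Receiver generates a quantum-resistant key pair.
with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # Sender encapsulates a fresh shared secret against the receiver's public key.
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, shared_secret_sender = sender.encap_secret(public_key)

    # Receiver recovers the same shared secret from the ciphertext.
    shared_secret_receiver = receiver.decap_secret(ciphertext)

assert shared_secret_sender == shared_secret_receiver
# The shared secret can now key a classical symmetric cipher (e.g. AES-256-GCM),
# so the surrounding infrastructure is untouched while the key exchange itself
# resists harvest-now-decrypt-later attacks.
```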

Regulatory frameworks for digital sovereignty have become a national priority.
The EU has set the pace with GDPR and GAIA-X, prioritizing data residency and
local infrastructure. China's Cybersecurity Law and Personal Information
Protection Law enforce strict data localization. India's DPDP Act mandates local
storage for sensitive data, aligning with its digital self-reliance vision
through platforms such as Aadhaar. Russia's Federal Law No. 242-FZ requires
citizen data to stay within the country for the sake of national security.
Australia's Privacy Act focuses on data privacy, especially for health records,
and Canada's PIPEDA encourages local storage for government data. Saudi Arabia's
Personal Data Protection Law enforces localization for sensitive sectors, and
Indonesia's Personal Data Protection Law covers all citizen-centric data. Singapore's PDPA
balances privacy with global data flows, and Brazil's LGPD, mirroring the EU's
GDPR, mandates the protection of privacy and fundamental rights of its citizens.
... Tech companies have little option but to comply with the growing demands of
digital sovereignty. For example, Amazon Web Services has a digital sovereignty
pledge, committing to "a comprehensive set of sovereignty controls and features
in the cloud" without compromising performance.

Agentic AI governance is a framework that ensures artificial intelligence
systems operate within defined ethical, legal, and technical boundaries. This
governance is crucial for maintaining trust, compliance, and operational
efficiency, especially in industries such as Banking, Financial Services,
Insurance, and Capital Markets. In tandem with robust data quality management,
Agentic AI governance can substantially enhance the reliability and
effectiveness of AI-driven solutions. ... In industries such as Banking,
Financial Services, Insurance, and Capital Markets, the importance of Agentic
AI governance cannot be overstated. These sectors deal with vast amounts of
sensitive data and require high levels of accuracy, security, and compliance.
Here’s why Agentic AI governance is essential:
Enhanced Trust: Proper governance fosters trust among stakeholders by ensuring
AI systems are transparent, fair, and reliable.
Regulatory Compliance: Adherence to legal and regulatory requirements helps
avoid penalties and safeguard against legal risks.
Operational Efficiency: By mitigating risks and ensuring accuracy, AI governance
enhances overall operational efficiency and decision-making.
Protection of Sensitive Data: Robust governance frameworks protect sensitive
financial data from breaches and misuse, ensuring privacy and security.
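
The "defined boundaries" the excerpt describes are ultimately enforced in code. As a purely illustrative sketch (not any specific governance framework), here is a minimal allow-list guardrail that checks an agent's proposed action against policy before it executes; the action names, limits, and fields are invented assumptions.

```python
# Illustrative agent-action guardrail: every proposed action is checked against
# an explicit policy before execution. Names and limits are hypothetical.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str          # e.g. "read_account", "transfer_funds"
    amount: float = 0.0
    requires_pii: bool = False

# Policy boundaries a governance team might define for a banking agent.
ALLOWED_ACTIONS = {"read_account", "flag_transaction", "transfer_funds"}
MAX_UNSUPERVISED_TRANSFER = 1_000.00  # larger transfers need a human approver

def authorize(action: ProposedAction) -> tuple[bool, str]:
    """Return (allowed, reason) so every decision leaves an audit trail."""
    if action.kind not in ALLOWED_ACTIONS:
        return False, f"action '{action.kind}' is outside the approved catalogue"
    if action.requires_pii:
        return False, "PII access requires an elevated, logged approval path"
    if action.kind == "transfer_funds" and action.amount > MAX_UNSUPERVISED_TRANSFER:
        return False, "transfer exceeds unsupervised limit; route to human review"
    return True, "within policy"

if __name__ == "__main__":
    for proposal in (ProposedAction("read_account"),
                     ProposedAction("transfer_funds", amount=25_000)):
        allowed, reason = authorize(proposal)
        print(f"{proposal.kind}: {'ALLOW' if allowed else 'BLOCK'} ({reason})")
```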

Keeping the dimensions separate from facts makes it easier for analysts to
slice-and-dice and filter data to align with the relevant context underlying a
business problem. Data modelers organize these facts and descriptive dimensions
into separate tables within the data warehouse, aligning them with the different
subject areas and business processes. ... Dimensional modeling provides a basis
for meaningful analytics gathered from a data warehouse for many reasons. Its
processes standardize dimensions and present the data blueprint intuitively.
Additionally, dimensional data modeling proves flexible as business needs
evolve. The data warehouse accommodates such change through the concept of
slowly changing dimensions (SCD) as new business contexts emerge. ...
Alignment in the design requires these processes, and data governance plays an
integral role in getting there. Once the organization is on the same page about
the dimensional model’s design, it chooses the best kind of implementation.
Implementation choices include the star or snowflake schema around a fact. When
organizations have multiple facts and dimensions, they use a cube. A dimensional
model defines how a data warehouse architecture, or one of its components,
should be built through good design and implementation.
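
To make the fact/dimension split concrete, here is a small illustrative star-schema example in Python with pandas: one fact table of sales measures keyed to product and date dimension tables, then sliced by a dimension attribute. The table and column names are invented for illustration; a production warehouse would implement the same structure in its own SQL engine.

```python
# Hypothetical star schema: one fact table plus two dimension tables.
import pandas as pd

dim_product = pd.DataFrame({
    "product_key": [1, 2],
    "product_name": ["Widget", "Gadget"],
    "category": ["Hardware", "Hardware"],
})
dim_date = pd.DataFrame({
    "date_key": [20240101, 20240102],
    "calendar_date": pd.to_datetime(["2024-01-01", "2024-01-02"]),
    "quarter": ["Q1", "Q1"],
})
fact_sales = pd.DataFrame({           # facts hold only keys and measures
    "date_key": [20240101, 20240101, 20240102],
    "product_key": [1, 2, 1],
    "units_sold": [10, 4, 7],
    "revenue": [100.0, 80.0, 70.0],
})

# "Slice and dice": join facts to dimensions, then group by descriptive attributes.
sales = (fact_sales
         .merge(dim_product, on="product_key")
         .merge(dim_date, on="date_key"))
print(sales.groupby(["quarter", "product_name"])["revenue"].sum())
```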

The latest research, published this week by application security vendor OX
Security, reveals the hidden dangers of verified IDE extensions. While IDEs
provide an array of development tools and features, a variety of third-party
extensions offer additional capabilities and are available both in official
marketplaces and on external websites. ... But OX researchers
realized they could add functionality to verified extensions after the fact and
still maintain the checkmark icon. After analyzing traffic for Visual Studio
Code, the researchers found a server request to the marketplace that determines
whether the extension is verified; they discovered they could modify the values
featured in the server request and maintain the verification status even after
creating malicious versions of the approved extensions. ... Using this attack
technique, a threat actor could inject malicious code into verified and
seemingly safe extensions that would maintain their verified status. "This can
result in arbitrary code execution on developers' workstations without their
knowledge, as the extension appears trusted," Siman-Tov Bustan and Zadok wrote.
"Therefore, relying solely on the verified symbol of extensions is inadvisable."
... "It only takes one developer to download one of these extensions," he says.
"And we're not talking about lateral movement. ..."
A key driver behind the business case for agentic AI in the SOC is the acute
shortage of skilled security analysts. The global cybersecurity workforce gap is
now estimated at 4 million professionals, but the real bottleneck for most
organizations is the scarcity of experienced analysts with the expertise to
triage, investigate, and respond to modern threats. One ISC2 survey report from
2024 shows that 60% of organizations worldwide reported staff shortages
significantly impacting their ability to secure their organizations, with another
report from the World Economic Forum showing that just 15% of organizations
believe they have the right people with the right skills to properly respond to
a cybersecurity incident. Existing teams are stretched thin, often forced to
prioritize which alerts to investigate and which to leave unaddressed. As
previously mentioned, the flood of false positives in most SOCs means that even
the most experienced analysts are too distracted by noise, increasing exposure
to business-impacting incidents. Given these realities, simply adding more
headcount is neither feasible nor sustainable. Instead, organizations must focus
on maximizing the impact of their existing skilled staff. The AI SOC Analyst
addresses this by automating routine Tier 1 tasks, filtering out noise, and
surfacing the alerts that truly require human judgment.
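
As a hedged illustration of the Tier 1 filtering described above (not any vendor's actual product), here is a minimal triage sketch in Python: score incoming alerts, auto-close obvious noise, and surface only what needs human judgment. The alert fields, thresholds, and routing labels are invented assumptions.

```python
# Illustrative Tier 1 triage filter: suppress noise, escalate what needs a human.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int              # 1 (informational) .. 5 (critical)
    asset_is_crown_jewel: bool
    seen_before_as_benign: bool

def triage(alert: Alert) -> str:
    """Return 'auto-close', 'enrich-and-queue', or 'escalate-to-analyst'."""
    if alert.seen_before_as_benign and alert.severity <= 2:
        return "auto-close"                  # known-benign noise
    if alert.severity >= 4 or alert.asset_is_crown_jewel:
        return "escalate-to-analyst"         # needs human judgment now
    return "enrich-and-queue"                # gather more context, revisit later

if __name__ == "__main__":
    alerts = [
        Alert("EDR", 1, False, True),
        Alert("SIEM", 5, True, False),
        Alert("IDS", 3, False, False),
    ]
    for a in alerts:
        print(a.source, "->", triage(a))
```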

Microservices will reduce dependencies, because they force you to serialize your
types into generic graph objects (read: JSON or XML or something similar). This
implies that you can just transform your classes into a generic graph object at
its interface edges, and accomplish the exact same thing. ... There are valid
arguments for using message brokers, and there are valid arguments for
decoupling dependencies. There are even valid points of scaling out horizontally
by segregating functionality on to different servers. But if your argument in
favor of using microservices is "because it eliminates dependencies," you're
either crazy, corrupt through to the bone, or you have absolutely no idea what
you're talking about (make your pick!). Because you can easily achieve the same
amount of decoupling using Active Events and Slots, combined with a generic
graph object, in-process, and it will execute 2 billion times faster in
production than your "microservice solution" ... "Microservice Architecture" and
"Service Oriented Architecture" (SOA) have probably caused more harm to our
industry than the financial crisis in 2008 caused to our economy. And the funny
thing is, the damage is ongoing because of people repeating mindless
superstitious belief systems as if they were the truth.
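
Whatever framework the author's "Active Events and Slots" come from, the in-process decoupling he argues for can be sketched generically: a registry of named slots invoked with a plain dictionary (a "generic graph object") gives the same loose coupling as a JSON call over the network, without the network hop. The slot name and payload below are hypothetical illustrations.

```python
# In-process decoupling sketch: named slots invoked with a generic dict payload,
# loosely modelled on the "Active Events and Slots" idea the author mentions.
from typing import Callable, Dict

_slots: Dict[str, Callable[[dict], dict]] = {}

def slot(name: str):
    """Register a handler under a name; callers never import it directly."""
    def register(handler: Callable[[dict], dict]):
        _slots[name] = handler
        return handler
    return register

def signal(name: str, payload: dict) -> dict:
    """Invoke a slot by name with a generic graph object (a plain dict)."""
    return _slots[name](payload)

@slot("orders.create")
def create_order(payload: dict) -> dict:
    # No compile-time dependency on the caller's types: only the dict shape matters.
    return {"status": "created", "total": sum(payload.get("prices", []))}

if __name__ == "__main__":
    # The same decoupling a JSON microservice call would give, but in-process.
    print(signal("orders.create", {"prices": [9.99, 4.50]}))
```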

Direct-to-chip liquid cooling delivers impressive efficiency but doesn’t manage
the entire thermal load. That’s why hybrid systems that combine liquid and
traditional air cooling are increasingly popular. These systems offer the
ability to fine-tune energy use, reduce reliance on mechanical cooling, and
optimize server performance. HiRef offers advanced cooling distribution units
(CDUs) that integrate liquid-cooled servers with heat exchangers and support
infrastructure like dry coolers and dedicated high-temperature chillers. This
integration ensures seamless heat management regardless of local climate or load
fluctuations. ... With liquid cooling systems capable of operating at higher
temperatures, facilities can increasingly rely on external conditions for
passive cooling. This shift not only reduces electricity usage, but also allows
for significant operational cost savings over time. But this sustainable future
also depends on regulatory compliance, particularly in light of the recently
updated F-Gas Regulation, which took effect in March 2024. The EU regulation
aims to reduce emissions of fluorinated greenhouse gases to net-zero by 2050 by
phasing out harmful high-GWP refrigerants like HFCs. “The F-Gas regulation isn’t
directly tailored to the data center sector,” explains Poletto.

Threat intelligence firm Censys has scanned the internet twice a month for the
last six months, looking for a representative sample composed of four widely
used types of ICS devices publicly exposed to the internet. Overall exposure
slightly increased from January through June, the firm said Monday. One of the
devices Censys scanned for is programmable logic controllers made by
Israel-based Unitronics. The firm's Vision-series devices are used in numerous
industries, including the water and wastewater sector. Researchers also counted
publicly exposed devices built by Israel-based Orpak - a subsidiary of Gilbarco
Veeder-Root - that run SiteOmat fuel station automation software. Censys also looked
for devices made by Red Lion that are widely deployed for factory and process
automation, as well as in oil and gas environments. It additionally probed for
instances of a facilities automation software framework known as Niagara, made
by Tridium. ... Report author Emily Austin, principal security researcher at
Censys, said some fluctuation over time isn't unusual, given how "services on
the internet are often ephemeral by nature." The greatest number of publicly
exposed systems were in the United States, except for Unitronics devices, which
are also widely used in Australia.

Security must be embedded early and consistently throughout the development
lifecycle, and that requires cross-functional alignment and leadership support.
Without an understanding of how regulations translate into practical, actionable
security controls, CISOs can struggle to achieve traction within fast-paced
development environments. ... Security objectives should be mapped to these
respective cycles—addressing tactical issues like vulnerability remediation
during sprints, while using PI planning cycles to address larger technical and
security debt. It’s also critical to position security as an enabler of business
continuity and trust, rather than a blocker. Embedding security into existing
workflows rather than bolting it on later builds goodwill and ensures more
sustainable adoption. ... The key is intentional consolidation. We prioritize
tools that serve multiple use cases and are extensible across both DevOps and
security functions. For example, we choose solutions that can support
infrastructure-as-code security scanning, cloud posture management, and
application vulnerability detection within the same ecosystem. Standardizing
tools across development and operations not only reduces overhead but also makes
it easier to train teams, integrate workflows, and gain unified visibility into
risk.