Fortifying Cyber Resilience with Trusted Data Integrity
While it is tempting to put all of the focus on keeping the bad guys out,
there is an important truth to remember: Cybercriminals are persistent, and
eventually they find a way in. The key is not to try to build an
impenetrable wall, because that wall does not exist. Instead, organizations
need to have a defense strategy at the data level. By monitoring data for
signs of ransomware behavior, the spread of the attack can be slowed or even
stopped. This monitoring involves analyzing data and watching for patterns that indicate a
ransomware attack is in progress. When caught early, organizations have the
power to stop the attack before it causes widespread damage. Once an attack
has been identified, it is time to execute the curated recovery plan. That
means not just restoring everything in one action but instead selectively
recovering the clean data and leaving the corrupted files behind. ... Trusted
data integrity offers a new way forward. By ensuring that data remains clean
and intact, detecting corruption early, and enabling a faster, more
intelligent recovery, data integrity is the key to reducing the damage and
cost of a ransomware attack. In the end, it’s all about being prepared.
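To make the idea of data-level monitoring concrete, here is a minimal sketch (not any specific vendor's product) that flags files whose contents suddenly become high-entropy, one common signature of ransomware encryption. The scan root, sample size, and threshold are illustrative assumptions.

```python
# Illustrative sketch of data-level ransomware monitoring: flag files whose
# sampled contents look encrypted (high entropy). All paths and thresholds
# here are assumptions for demonstration purposes.
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def scan_for_suspicious_files(root: str, threshold: float = 7.5):
    """Yield files whose sampled content exceeds the entropy threshold."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    sample = f.read(65536)  # sample the first 64 KiB
            except OSError:
                continue
            if shannon_entropy(sample) > threshold:
                yield path

# Example usage: feed flagged paths into an alerting or curated-recovery workflow.
for suspicious in scan_for_suspicious_files("/data"):
    print(f"possible encryption detected: {suspicious}")
```

A real detection pipeline would also watch modification rates, extension changes, and backup integrity, but the principle is the same: look for patterns in the data itself rather than only at the perimeter.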
Regulating AI Catastrophic Risk Isn't Easy
Catastrophic risks are those that cause a failure of the system, said Ram
Bala, associate professor of business analytics at Santa Clara University's
Leavey School of Business. Risks could range from endangering all of humanity
to more contained impact, such as disruptions affecting only enterprise
customers of AI products, he told Information Security Media Group. Deming
Chen, professor of electrical and computer engineering at the University of
Illinois, said that if AI were to develop a form of self-interest or
self-awareness, the consequences could be dire. "If an AI system were to start
asking, 'What's in it for me?' when given tasks, the results could be severe,"
he said. Unchecked self-awareness might drive AI systems to manipulate their
abilities, leading to disorder and potentially catastrophic outcomes. Bala
said that most experts see these risks as "far-fetched," since AI systems
currently lack sentience or intent, and likely will for the foreseeable
future. But some form of catastrophic risk might already be here. Eric
Wengrowski, CEO of Steg.AI, said that AI's "widespread societal or economic
harm" is evident in disinformation campaigns through deepfakes and digital
content manipulation.
The Importance of Lakehouse Formats in Data Streaming Infrastructure
Most data scientists spend the majority of their time getting data into a
single format. However, when your streaming infrastructure has data processing
capabilities, you can update the formats of that data at the ingestion layer
and land the data in the standardized format you want to analyze. Streaming
infrastructure should also scale seamlessly like Lakehouse architectures,
allowing organizations to add storage and compute resources as needed. This
scalability ensures that the system can handle growing data volumes and
increasing analytical demands without major overhauls or disruptions to
existing workflows. ... As data continues to play an increasingly central role
in business operations and decision-making, the importance of efficient,
flexible, and scalable data architectures will only grow. The integration of
lakehouse formats with streaming infrastructure represents a significant step
forward in meeting these evolving needs. Organizations that embrace this
unified approach to data management will be better positioned to derive value
from their data assets, respond quickly to changing market conditions, and
drive innovation through advanced analytics and AI applications.
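As a rough sketch of what format conversion at the ingestion layer can look like, the snippet below reads a stream from Kafka with Spark Structured Streaming and lands it in a single lakehouse table format (Delta is used here; Iceberg or Hudi follow a similar pattern). The broker address, topic name, schema, and paths are assumptions for illustration.

```python
# Minimal sketch: convert raw streaming records at ingestion and land them in
# a standardized lakehouse table format. Assumes a Kafka topic named "events"
# and Delta Lake available on the Spark cluster.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("stream-to-lakehouse").getOrCreate()

# Schema of the incoming JSON events (hypothetical fields for illustration).
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

# Read raw bytes from Kafka and normalize them into a typed, columnar layout.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
    .option("subscribe", "events")
    .load()
)

events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Land the standardized records in the lakehouse table format you analyze.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # assumed path
    .outputMode("append")
    .start("/tmp/lakehouse/events")  # assumed table location
)
```

The design point is that the conversion happens once, at ingestion, so downstream analytics and AI workloads all read the same standardized tables.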
Open source culture: 9 core principles and values
Whether you’re experienced or just starting out, your contributions are
valued in open source communities. This shared responsibility helps keep the
community strong and makes sure the projects run smoothly. When people come
together to contribute and work toward shared goals, it fuels creativity and
drives productivity. ... While the idea of meritocracy is incredibly
appealing, there are still some challenges that come along with it. In
reality, the world is not fair and people do not get the same opportunities
and resources to express their ideas. Many people face challenges such as
lack of resources or societal biases that often go unacknowledged in
"meritocratic" situations. Essentially, open source communities suffer from
the same biases as any other communities. For meritocracy to truly work,
open source communities need to actively and continuously work to make sure
everyone is included and has a fair and equal opportunity to contribute. ...
Open source is all about giving everyone a chance to make an impact and a
difference. As mentioned previously, titles and positions don’t define the
value of your work and ideas—what truly matters is the expertise, work and
creativity you bring to the table.
How to Ensure Cloud Native Architectures Are Resilient and Secure
Microservices offer flexibility and faster updates but also introduce
complexity — and more risk. In this case, the company had split its platform
into dozens of microservices, handling everything from user authentication
to transaction processing. While this made scaling easier, it also
increased the potential for security vulnerabilities. With so many moving
parts, monitoring API traffic became a significant challenge, and critical
vulnerabilities went unnoticed. Without proper oversight, these blind spots
could quickly become significant entry points for attackers. Unmanaged APIs
could create serious vulnerabilities in the future. If these gaps aren’t
addressed, companies could face major threats within a few years. ... As
companies increasingly embrace cloud native technologies, the rush to
prioritize agility and scalability often leaves security as an afterthought.
But that trade-off isn’t sustainable. By 2025, unmanaged APIs could expose
organizations to significant breaches unless proper controls are implemented
today.
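A minimal sketch of the kind of API-traffic visibility described above, assuming a FastAPI service and a hypothetical inventory of managed routes: middleware counts requests per endpoint so that calls to unmanaged APIs stand out instead of going unnoticed.

```python
# Illustrative sketch: per-endpoint traffic counting plus a simple check
# against a (hypothetical) inventory of managed APIs.
from collections import Counter
from fastapi import FastAPI, Request

app = FastAPI()
KNOWN_ROUTES = {"/auth/login", "/transactions"}  # assumed inventory of managed APIs
traffic = Counter()

@app.middleware("http")
async def count_api_traffic(request: Request, call_next):
    path = request.url.path
    traffic[path] += 1  # running count gives basic visibility per endpoint
    if path not in KNOWN_ROUTES:
        # In practice this would feed an alerting pipeline; print is a stand-in.
        print(f"unmanaged endpoint called: {path}")
    return await call_next(request)

@app.get("/transactions")
async def transactions():
    return {"status": "ok"}
```

The counts and "unmanaged endpoint" signals would normally flow into an API gateway or observability stack, but even this small layer removes the blind spot of not knowing which APIs are being called at all.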
Focus on Tech Evolution, Not on Tech Debt
Tech Evolution represents a mindset shift. Instead of simply repairing the
system, Tech Evolution emphasises continuous improvement, where the team
proactively advances the system to stay ahead of future requirements. It’s a
strategic, long-term investment in the growth and adaptability of the
technology stack. Tech Evolution is about future-proofing your platform.
Rather than focusing on past mistakes (tech debt), the focus shifts toward
how the technology can evolve to accommodate new trends, user demands, and
business goals. ... One way to put Tech Evolution into action is to dedicate time
specifically for innovation. Development teams can use innovation days,
hackathons, or R&D-focused sprints to explore new ideas, tools, and
frameworks. This builds a culture of experimentation and continuous
learning, allowing the team to identify future opportunities for evolving
the tech stack. ... Fostering a culture of continuous learning is essential
for Tech Evolution. Offering training programs, hosting workshops, and
encouraging attendance at conferences ensures your team stays informed about
emerging technologies and best practices.
Singapore’s Technology Empowered AML Framework
Developed by the Monetary Authority of Singapore (MAS) in collaboration with
six major banks, COSMIC is a centralised digital platform for global
information sharing among financial institutions to combat money laundering,
terrorism financing, and proliferation financing, enhancing defences against
illicit activities. By pooling insights from different financial entities,
COSMIC enhances Singapore’s ability to detect and disrupt money laundering
schemes early, particularly when transactions cross international
borders (IMC Report). Another significant collaboration is the Anti-Money
Laundering/Countering the Financing of Terrorism Industry Partnership
(ACIP). This partnership between MAS, the Commercial Affairs Department
(CAD) of the Singapore Police Force, and private-sector financial
institutions allows for the sharing of best practices, the issuance of
advisories, and the development of enhanced AML measures. ... Another
crucial aspect of Singapore’s AML strategy is the AML Case Coordination and
Collaboration Network (AC3N). This new framework builds on the Inter-Agency
Suspicious Transaction Reports Analytics (ISTRA) task force to improve
coordination between all relevant agencies.
Future-proofing Your Data Strategy with a Multi-tech Platform
Traditional approaches powered by just one or two tools, like
Apache Cassandra or Apache Kafka, were once the way to proceed. Used alone,
however, these tools are now proving insufficient to meet the demands of
modern data ecosystems. The challenges presented by today’s distributed,
real-time, and unstructured data have made it clear that businesses need a
new strategy. Increasingly, that strategy involves the use of a multi-tech
platform. ... Implementing a multi-tech platform can be complex, especially
considering the need to manage integrations, scalability, security, and
reliability across multiple technologies. Many organizations simply do not
have the time or expertise in the different technologies to pull this off.
Increasingly, organizations are partnering with a technology provider that
has the expertise in scaling traditional open-source solutions and the
real-world knowledge in integrating the different solutions. That’s where
Instaclustr by NetApp comes in. Instaclustr offers a fully managed platform
that brings together a comprehensive suite of open-source data
technologies.
Strong Basics: The Building Blocks of Software Engineering
It is alarmingly easy to assume a “truth” on faith when, in reality, it is
open to debate. Effective problem-solving starts by examining assumptions
because the assumptions that survive your scrutiny will dictate which
approaches remain viable. If you didn’t know your intended plan rested on an
unfounded or invalid assumption, imagine how disastrous it would be to
proceed anyway. Why take that gamble? ... Test everything you design or
build. It is astounding how often testing gets skipped. A recent study
showed that just under half of the time, information security professionals
don’t audit major updates to their applications. It’s tempting to look at
your application on paper and reason that it should be fine. But if
everything worked like it did on paper, testing would never find any issues
— yet so often it does. The whole point of testing is to discover what you
didn’t anticipate. Because no one can foresee everything, the only way to
catch what you didn’t is to test. ... companies continue to squeeze out more
productivity from their workforce by adopting the cutting-edge technology of
the day, generative AI being merely the latest iteration of this
trend.
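The point about testing is easy to illustrate with a trivial, hypothetical example: a helper that looks obviously correct on paper still hides an edge case that only a test surfaces.

```python
# Hypothetical example: an average() helper that "should be fine" on paper.
def average(values):
    # A naive sum(values) / len(values) raises ZeroDivisionError on an empty
    # list - exactly the kind of unanticipated case a test surfaces.
    if not values:
        return 0.0
    return sum(values) / len(values)

# Run with pytest: cover both the obvious case and the overlooked edge case.
def test_average_typical():
    assert average([2, 4, 6]) == 4

def test_average_empty_input():
    # Easy to miss when reasoning on paper; explicit once written as a test.
    assert average([]) == 0.0
```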
The resurgence of DCIM: Navigating the future of data center management
A significant factor behind the resurgence of DCIM is the exponential growth
in data generation and the requirement for more infrastructure capacity.
Businesses, consumers, and devices are producing data at unprecedented
rates, driven by trends such as cloud computing, digital transformation, and
the Internet of Things (IoT). This influx of data has created a critical
demand for advanced tools that can offer comprehensive visibility into
resources and infrastructure. Organizations are increasingly seeking DCIM
solutions that enable them to efficiently scale their data centers to handle
this growth while maintaining optimal performance. ... Modern DCIM
solutions, such as RiT Tech’s XpedITe, also leverage AI and machine learning
to provide predictive maintenance capabilities. By analyzing historical data
and identifying patterns, these solutions can predict when equipment is likely
to fail and automatically schedule maintenance ahead of any failure, as well as
automating routine tasks such as resource allocation. As data
centers continue to grow in size and complexity, effective capacity planning
becomes increasingly important. DCIM solutions provide the tools needed to
plan and optimize capacity, ensuring that data center resources are used
efficiently and that there is sufficient capacity to meet future demand.
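As a hedged illustration of the predictive-maintenance idea (synthetic data, not a depiction of XpedITe or any specific DCIM product), the sketch below trains a simple classifier on historical sensor readings and failure labels, then scores a current reading to decide whether to schedule maintenance.

```python
# Illustrative sketch of predictive maintenance on synthetic data: learn which
# historical conditions preceded failures, then score current equipment.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Historical readings: temperature (C), fan speed (RPM), equipment age (months).
X_history = rng.normal(loc=[45.0, 3000.0, 24.0], scale=[5.0, 400.0, 10.0], size=(500, 3))
# Label 1 = the unit failed within the following 30 days (synthetic rule here).
y_history = ((X_history[:, 0] > 50) & (X_history[:, 1] < 2800)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_history, y_history)

# Score a current rack's readings and schedule maintenance if the risk is high.
current = np.array([[53.0, 2600.0, 30.0]])
failure_risk = model.predict_proba(current)[0, 1]
if failure_risk > 0.5:  # threshold is an assumption
    print(f"schedule maintenance: predicted failure risk {failure_risk:.0%}")
```

A production DCIM system would draw on far richer telemetry and feed the prediction into an automated ticketing or scheduling workflow, but the underlying pattern-learning step looks much like this.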
Quote for the day:
“Too many of us are not living our dreams because we are living our
fears.” -- Les Brown