Why Staging Is a Bottleneck for Microservice Testing
Multiple teams often wait their turn to test features in staging, creating
bottlenecks. The pressure to share resources can severely delay releases as
teams fight for access to the staging environment.
Developers who attempt to spin up the entire stack on their local machines for
testing run into similar issues. As distributed systems engineer Cindy
Sridharan notes, “I now believe trying to spin up the full stack on developer
laptops is fundamentally the wrong mindset to begin with, be it at startups or
at bigger companies.” The complexities of microservices make it impractical to
replicate entire environments locally, just as it’s difficult to maintain
shared staging environments at scale. ... From a release process perspective,
the delays caused by a fragile staging environment lead to slower shipping of
features and patches. When teams spend more time fixing staging issues than
building new features, product development slows down. In fast-moving
industries, this can be a major competitive disadvantage. If your release
process is painful, you ship less often, and the cost of mistakes in
production is higher.
Misconfiguration Madness: Thwarting Common Vulnerabilities in the Financial Sector
Financial institutions require legions of skilled security personnel to
overcome the many challenges facing their industry. Developers are an
especially important part of that elite cadre of defenders for a variety of
reasons. First and foremost, security-aware developers can write secure code
for new applications, which can thwart attackers by denying them a foothold in
the first place. If there are no vulnerabilities to exploit, an attacker won't
be able to operate, at least not very easily. Developers with the right
training can also help to support both modern and legacy applications by
examining the existing code that makes up some of the primary vectors used to
attack financial institutions. That includes cloud misconfigurations, lax API
security, and the many legacy bugs found in applications written in COBOL and
other aging programming languages. However, nurturing and maintaining
security-aware developers in the financial sector won’t happen on its own. It
requires precise, immersive training programs that are highly customizable and
matched to the specific complex environment that a financial services
institution is using.
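To make the misconfiguration point concrete, here is a minimal sketch of the kind of automated check a security-aware developer might write. The configuration fields (`public_access`, `encryption_at_rest`, `access_logging`) are invented for illustration and do not correspond to any specific cloud provider's API.

```python
# Illustrative sketch: flag common storage misconfigurations in a
# hypothetical cloud bucket configuration. Field names are invented
# and do not map to any real provider's API.

def audit_bucket(config: dict) -> list:
    """Return a list of misconfiguration findings for one bucket."""
    findings = []
    if config.get("public_access", False):
        findings.append("bucket is publicly readable")
    if not config.get("encryption_at_rest", False):
        findings.append("encryption at rest is disabled")
    if not config.get("access_logging", False):
        findings.append("access logging is disabled")
    return findings

risky = {"public_access": True, "encryption_at_rest": False}
print(audit_bucket(risky))
```

Checks like this, run in CI, catch the "foothold" class of mistakes before code reaches production.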
3 things to get right with data management for gen AI projects
The first is a series of processes — collecting, filtering, and categorizing
data — that may take several months for KM or RAG models. Structured data is
relatively easy, but the unstructured data, while much more difficult to
categorize, is the most valuable. “You need to know what the data is, because
it’s only after you define it and put it in a taxonomy that you can do
anything with it,” says Shannon. ... “We started with generic AI usage
guidelines, just to make sure we had some guardrails around our experiments,”
she says. “We’ve been doing data governance for a long time, but when you
start talking about automated data pipelines, it quickly becomes clear you
need to rethink the older models of data governance that were built more
around structured data.” Compliance is another important area of focus. As a
global enterprise thinking about scaling some of its AI projects, Harvard
keeps an eye on evolving regulatory environments in different parts of the
world. It has an active working group dedicated to following and understanding
the EU AI Act, and before use cases go into production, it runs them through a
process to make sure all compliance obligations are satisfied.
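The point about defining data and putting it in a taxonomy can be pictured with a deliberately simple sketch: before unstructured documents can feed a KM or RAG pipeline, each one needs at least a coarse category. The categories and keywords below are invented examples, not any organization's actual taxonomy; real systems would use classifiers or embeddings rather than keyword matching.

```python
# Toy keyword-based tagger: assign each unstructured document a
# category from a small, invented taxonomy. The governance need is
# the same regardless of technique: data must be defined and
# categorized before anything useful can be done with it.

TAXONOMY = {
    "finance": ["invoice", "budget", "payment"],
    "hr": ["hiring", "benefits", "payroll"],
    "legal": ["contract", "compliance", "liability"],
}

def categorize(text: str) -> str:
    lowered = text.lower()
    for category, keywords in TAXONOMY.items():
        if any(k in lowered for k in keywords):
            return category
    return "uncategorized"

print(categorize("Q3 budget review and payment schedule"))  # finance
```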
Fundamentals of Data Preparation
Data preparation is intended to improve the quality of the information that ML
and other information systems use as the foundation of their analyses and
predictions. Higher-quality data leads to greater accuracy in the analyses the
systems generate in support of business decision-makers. This is the textbook
explanation of the link between data preparation and business outcomes, but in
practice, the connection is less linear. ... Careful data preparation adds
value to the data itself, as well as to the information systems that rely on
the data. It goes beyond checking for accuracy and relevance and removing
errors and extraneous elements. The data-prep stage gives organizations the
opportunity to supplement the information by adding geolocation, sentiment
analysis, topic modeling, and other aspects. Building an effective data
preparation pipeline begins long before any data has been collected. As with
most projects, the preparation starts at the end: identifying the
organization’s goals and objectives, and determining the data and tools
required to achieve those goals. ... Appropriate data preparation is the key
to the successful development and implementation of AI systems in large part
because AI amplifies existing data quality problems.
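As a sketch of what adding value at the data-prep stage can look like in practice, the snippet below cleans a small record set and enriches it with a derived field. The records, field names, and naive keyword-based sentiment rule are all invented for illustration; a production pipeline would use a real sentiment model.

```python
# Minimal data-prep sketch: validate, deduplicate, then enrich
# records with a derived sentiment label. The keyword rule is a
# stand-in for a real sentiment-analysis model.

POSITIVE = {"great", "fast", "reliable"}
NEGATIVE = {"slow", "broken", "buggy"}

def prepare(records):
    seen, cleaned = set(), []
    for rec in records:
        text = rec.get("text", "").strip()
        if not text or text in seen:  # drop empty rows and duplicates
            continue
        seen.add(text)
        words = set(text.lower().split())
        if words & POSITIVE:
            sentiment = "positive"
        elif words & NEGATIVE:
            sentiment = "negative"
        else:
            sentiment = "neutral"
        cleaned.append({"text": text, "sentiment": sentiment})
    return cleaned

rows = [{"text": "great reliable service"},
        {"text": "great reliable service"},  # duplicate
        {"text": ""},                        # empty
        {"text": "support was slow"}]
print(prepare(rows))
```

Each step maps to the article's framing: the dedup and empty-row checks are quality, the sentiment field is supplementation.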
How to Rein in Cybersecurity Tool Sprawl
Security tool sprawl happens for many different reasons. Adding new tools and
new vendors as new problems arise without evaluating the tools already in
place is often how sprawl starts. The sheer glut of tools available in the
market can make it easy for security teams to embrace the latest and greatest
solutions. “[CISOs] look for the newest, the latest and the greatest. They're
the first adopter type,” says Reiter. A lack of communication between
departments and teams in an enterprise can also contribute. “There's the
challenge of teams not necessarily knowing the day-to-day functions of other
teams,” says Mar-Tang. Security leaders can start to wrap their heads around
the problem of sprawl by running an audit of the security tools in place.
Which teams use which tools? How often are the tools used? How many vendors
supply those tools? What are the lengths of the vendor contracts? Breaking
down communication barriers within an enterprise will be a necessary part of
answering questions like these. “Talk to the … security and IT risk side of
your house, the people who clean up the mess. You have an advocate and a
partner to be able to find out where you have holes and where you have
sprawl,” Kris Bondi, CEO and co-founder at endpoint security company Mimoto,
recommends.
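The audit questions above lend themselves to a simple inventory model. This sketch, with invented tool, vendor, and team names, shows how even a flat list of records can answer which teams use which tools, which tools sit idle, and how many vendors are involved.

```python
# Sketch of a security tool audit over an invented inventory.
from collections import Counter

inventory = [
    {"tool": "EDR-X", "vendor": "Acme", "team": "SecOps", "uses_per_week": 40},
    {"tool": "SIEM-Y", "vendor": "Acme", "team": "SecOps", "uses_per_week": 35},
    {"tool": "Scanner-Z", "vendor": "Globex", "team": "AppSec", "uses_per_week": 2},
    {"tool": "Scanner-W", "vendor": "Initech", "team": "AppSec", "uses_per_week": 0},
]

# How many vendors supply those tools?
vendors = Counter(item["vendor"] for item in inventory)
# Which tools are rarely or never used? (candidates for consolidation)
rarely_used = [item["tool"] for item in inventory if item["uses_per_week"] < 5]
# Which teams use which tools?
tools_by_team = {}
for item in inventory:
    tools_by_team.setdefault(item["team"], []).append(item["tool"])

print(len(vendors), rarely_used, tools_by_team)
```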
The Promise and Perils of Generative AI in Software Testing
The journey from human automation tester to AI test automation engineer is
transformative. Traditionally, transitioning to test automation required
significant time and resources, including learning to code and understanding
automation frameworks. AI removes these barriers and accelerates development
cycles, dramatically reducing time-to-market and improving accuracy while
cutting the administrative workload for software testers. AI-powered
tools can interpret test scenarios written in plain language, automatically
generate the necessary code for test automation, and execute tests across
various platforms and languages. This dramatically reduces the enablement
time, allowing QA professionals to focus on strategic tasks instead of coding
complexities. ... As GenAI becomes increasingly integrated into software
development life cycles, understanding its capabilities and limitations is
paramount. By effectively managing these dynamics, development teams can
leverage GenAI’s potential to enhance their testing practices while ensuring
the integrity of their software products.
Near-'perfctl' Fileless Malware Targets Millions of Linux Servers
The malware looks for vulnerabilities and misconfigurations to exploit in
order to gain initial access. To date, Aqua Nautilus reports, the malware has
likely targeted millions of Linux servers, and compromised thousands. Any
Linux server connected to the Internet is in its sights, so any server that
hasn't already encountered perfctl is at risk. ... By tracking its infections,
researchers identified three Web servers belonging to the threat actor: two
that were previously compromised in prior attacks, and a third likely set up
and owned by the threat actor. One of the compromised servers was used as the
primary base for malware deployment. ... To further hide its presence and
malicious activities from security software and researcher scrutiny, it
deploys a few Linux utilities repurposed into user-level rootkits, as well as
one kernel-level rootkit. The kernel rootkit is especially powerful, hooking
into various system functions to modify their functionality, effectively
manipulating network traffic, undermining Pluggable Authentication Modules
(PAM), establishing persistence even after primary payloads are detected and
removed, or stealthily exfiltrating data.
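One classic way to spot the kind of process hiding described above is to compare the kernel's own process list (the numeric entries in /proc) against what user-space tools like ps report; rootkits that hook user-space utilities leave a gap between the two. Below is a minimal sketch of that comparison, written over plain PID sets so the logic is clear and the example stays self-contained.

```python
# Sketch of a rootkit-detection heuristic: PIDs present in the
# kernel's /proc listing but missing from `ps` output may be hidden
# by a user-level rootkit. (A kernel rootkit can defeat this check
# by lying to /proc as well.)

def find_hidden_pids(proc_pids, ps_pids):
    """Return PIDs visible to the kernel but hidden from ps."""
    return sorted(set(proc_pids) - set(ps_pids))

# On a real Linux host, proc_pids could be gathered with:
#   proc_pids = {int(d) for d in os.listdir("/proc") if d.isdigit()}
# and ps_pids by parsing the output of `ps -e -o pid=`.
print(find_hidden_pids({1, 42, 1337}, {1, 42}))  # [1337]
```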
Three hard truths hindering cloud-native detection and response
Most SOC teams either lack the proper tooling or have so many cloud security
point tools that the management burden is untenable. Cloud attacks happen way
too fast for SOC teams to flip from one dashboard to another to determine if
an application anomaly has implications at the infrastructure level. Given the
interconnectedness of cloud environments and the accelerated pace at which
cloud attacks unfold, if SOC teams can’t see everything in one place, they’ll
never be able to connect the dots in time to respond. More importantly,
because everything in the cloud happens at warp speed, we humans need to act
faster, which can be nerve-wracking and increase the chance of accidentally
breaking something. While the latter is a legitimate concern, if we want to
stay ahead of our adversaries, we need to get comfortable with the accelerated
pace of the cloud. While there are no quick fixes to these problems, the
situation is far from hopeless. Cloud security teams are getting smarter and
more experienced, and cloud security toolsets are maturing in lockstep with
cloud adoption. And I, like many in the security community, am optimistic that
AI can help deal with some of these challenges.
How to Fight ‘Technostress’ at Work
Digital stressors don’t occur in isolation, according to the researchers,
which necessitates a multifaceted approach. “To address the problem, you can’t
just address the overload and invasion,” Thatcher said. “You have to be more
strategic.” “Let’s say I’m a manager, and I implement a policy that says no
email on weekends because everybody’s stressed out,” Thatcher said. “But
everyone stays stressed out. That’s because I may have gotten rid of
techno-invasion—that feeling that work is intruding on my life—but on Monday,
when I open my email, I still feel really overloaded because there are 400
emails.” It’s crucial for managers to assess the various digital stressors
affecting their employees and then target them as a combination, according to
the researchers. To address the above problem, Thatcher said, “you can’t just
address invasion. You can’t just address overload. You have to address them
together.” ... Another tool for managers is empowering
employees, according to the study. “As a manager, it may feel really dangerous
to say, ‘You can structure when and where and how you do work.’”
Fix for BGP routing insecurity ‘plagued by software vulnerabilities’ of its own, researchers find
Under BGP, there is no way to authenticate routing changes. The arrival of
RPKI just over a decade ago was intended to fix that, using a digital record
called a Route Origin Authorization (ROA) that identifies an ISP as having
authority over specific IP infrastructure. Route origin validation (ROV) is
the process a router undergoes to check that an advertised route is authorized
by the correct ROA certificate. In principle, this makes it impossible for a
rogue router to maliciously claim a route it does not have any right to. RPKI
is the public key infrastructure that glues this all together, security-wise.
The catch is that, for this system to work, RPKI needs a lot more ISPs to
adopt it, something which until recently has happened only very slowly. ...
“Since all popular RPKI software implementations are open source and accept
code contributions by the community, the threat of intentional backdoors is
substantial in the context of RPKI,” they explained. A software supply chain
that creates such vital software enabling internet routing should be subject
to a greater degree of testing and validation, they argue.
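The ROA/ROV relationship described above maps onto the three validation states defined in RFC 6811: valid, invalid, and not-found. Here is a minimal sketch of that check using Python's ipaddress module; the ROAs and AS numbers are invented documentation-range examples, and real ROV implementations handle far more (certificate validation, caching, and the RPKI-to-router protocol).

```python
# Minimal sketch of BGP route origin validation (RFC 6811 semantics):
# "valid" if some ROA covers the announced prefix with a matching
# origin ASN and sufficient max length, "invalid" if covered only by
# non-matching ROAs, "not-found" if no ROA covers the prefix.
import ipaddress

def validate_route(announced_prefix, origin_asn, roas):
    prefix = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa_prefix, max_length, roa_asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if prefix.version == roa_net.version and prefix.subnet_of(roa_net):
            covered = True
            if origin_asn == roa_asn and prefix.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "not-found"

roas = [("192.0.2.0/24", 24, 64500)]                  # invented ROA
print(validate_route("192.0.2.0/24", 64500, roas))    # valid
print(validate_route("192.0.2.0/25", 64500, roas))    # invalid (too specific)
print(validate_route("203.0.113.0/24", 64500, roas))  # not-found
```

The "invalid (too specific)" case is the one that blocks a rogue router: even a correct origin ASN cannot announce a more specific prefix than the ROA's max length allows.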
Quote for the day:
"You may have to fight a battle more
than once to win it." -- Margaret Thatcher