AI agents may lead the next wave of cyberattacks
“Many organizations run a pen test on maybe an annual basis, but a lot of things change within an application or website in a year,” he said. “Traditional cybersecurity organizations within companies have not been built for constant self-penetration testing.” Stytch is attempting to improve upon what McGinley-Stempel said are weaknesses in popular authentication schemes such as the Completely Automated Public Turing test to tell Computers and Humans Apart, or captcha, a type of challenge-response test used to determine whether a user interacting with a system is a human or a bot. Captcha codes may require users to decipher scrambled letters or count the number of traffic lights in an image. ... “If you’re just going to fight machine learning models on the attacking side with ML models on the defensive side, you’re going to get into some bad probabilistic situations that are not going to necessarily be effective,” he said. Probabilistic security provides protections based on probabilities but assumes that absolute security can’t be guaranteed. Stytch is working on deterministic approaches such as fingerprinting, which gathers detailed information about a device or software based on known characteristics and can provide a higher level of certainty that the user is who they say they are.
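To make the contrast concrete, here is a minimal sketch (Python) of the deterministic idea behind device fingerprinting; the attributes and hashing scheme are illustrative assumptions, not Stytch's actual implementation.

```python
# Illustrative sketch only: derive a stable device fingerprint from known,
# observable characteristics. Real products combine far more signals and
# handle attribute drift; the fields below are hypothetical examples.
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Hash a canonical, sorted representation of device attributes."""
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

known_device = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "screen": "2560x1440",
    "timezone": "Europe/London",
    "hardware_concurrency": 8,
}

# Deterministic: the same characteristics always yield the same identifier,
# so a returning device can be matched exactly rather than scored.
print(device_fingerprint(known_device))
assert device_fingerprint(known_device) == device_fingerprint(dict(known_device))
```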
How businesses can ensure cloud uptime over the holidays
To ensure uptime during the holidays, best practice should include conducting
pre-holiday stress tests to identify system vulnerabilities and configuring
autoscaling to handle demand surges. Experts also recommend simulating failures
through chaos engineering to expose weaknesses. Redundancy across regions or
availability zones is essential, as is a well-documented incident response plan
– with clear escalation paths – “as this allows a team to address problems
quickly even with reduced staffing,” says VimalRaj Sampathkumar, technical head
– UKI at software company ManageEngine. It’s all about understanding the
business requirements and what your demand is going to look like, says Luan
Hughes, chief information officer (CIO) at tech provider Telent, as this will
vary from industry to industry. “When we talk about preparedness, we talk a lot
about critical incident management and what happens when big things occur, but I
think you need to have an appreciation of what your triggers are,” she says. ...
It’s also important to focus on your people as much as your systems, she adds,
noting that it’s imperative to understand your management processes,
out-of-hours and on-call rota and how you action support if problems do
arise.
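As a rough illustration of the pre-holiday stress-testing advice above, the Python sketch below fires concurrent requests at a staging endpoint and reports latency percentiles; the URL, request count and concurrency are placeholder assumptions you would tune to your own expected demand.

```python
# Minimal load-test sketch: hit a staging endpoint concurrently and summarize
# latencies. Not a substitute for dedicated tools (k6, Locust, JMeter, etc.).
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "https://staging.example.com/health"  # hypothetical endpoint
REQUESTS = 200
CONCURRENCY = 20

def timed_request(_: int) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_request, range(REQUESTS)))
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    print(f"median={statistics.median(latencies):.3f}s p95={p95:.3f}s")
```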
Tech worker movements grow as threats of RTO, AI loom
While layoffs likely remain the most extreme threat to tech workers broadly, a
return-to-office (RTO) mandate can be just as jarring for remote tech workers
who are either unable to comply or else unwilling to give up the better
work-life balance that comes with no commute. Advocates told Ars that RTO
policies have pushed workers to join movements, while limited research
suggests that companies risk losing top talent by implementing RTO policies.
... Other companies mandating RTO faced similar backlash from workers, who
continued to question the logic driving the decision. One February study
showed that RTO mandates don't make companies any more valuable but do make
workers more miserable. And last month, Brian Elliott, an executive advisor
who wrote a book about the benefits of flexible teams, noted that only one in
three executives thinks RTO had "even a slight positive impact on
productivity." But not every company drew a hard line the way that Amazon did.
For example, Dell gave workers a choice: remain remote and accept that they would never be eligible for promotions, or mark themselves as hybrid. Workers who
refused the RTO said they valued their free time and admitted to looking for other job opportunities.
Navigating the cloud and AI landscape with a practical approach
When it comes to AI or genAI, just like everyone else, we started with use cases that we can control. These include content generation, sentiment analysis and related areas. As we explored these use cases and gained understanding, we started to dabble in other areas. For example, we have an exciting use case for cleaning up our data that leverages genAI as well as non-generative machine learning to help us identify inaccurate product descriptions or incorrect classifications and then clean them up and regenerate accurate, standardized descriptions. ... While this might be driving internal productivity, you also must think of it this way: As a distributor, at any one time, we deal with millions of parts. Our supplier partners keep sending us their price books, spec sheets and product information every quarter. So, having a group of people trying to go through all that data to find inaccuracies is a daunting, almost impossible, task. But with AI and genAI capabilities, we can clean up any inaccuracies far more quickly than humans could. Sometimes within as little as 24 hours. That helps us improve our ability to convert and drive business through an improved experience for our customers.
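As a sketch of how that kind of catalogue clean-up can work, the Python example below trains a simple classifier on trusted records and flags incoming records whose stored category disagrees with the prediction; the data, categories and the idea of handing flagged records to a generative model for rewriting are illustrative assumptions, not the distributor's actual pipeline.

```python
# Illustrative sketch: flag product records whose description looks
# inconsistent with their assigned category, then queue them for
# regeneration. Data and categories are made up for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

trusted = [
    ("M6 hex bolt, zinc plated, pack of 100", "fasteners"),
    ("Stainless steel wood screw, countersunk head", "fasteners"),
    ("Self-tapping sheet metal screw, 4.2 mm", "fasteners"),
    ("Two-part epoxy adhesive, 5 minute cure", "adhesives"),
    ("Cyanoacrylate super glue, 20 g bottle", "adhesives"),
    ("Polyurethane construction adhesive cartridge", "adhesives"),
]

incoming = [
    {"sku": "A-101", "category": "adhesives", "description": "M8 hex bolt, zinc plated"},
    {"sku": "B-202", "category": "adhesives", "description": "Epoxy adhesive, fast cure"},
]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(desc for desc, _ in trusted)
clf = LogisticRegression().fit(X_train, [label for _, label in trusted])

X_new = vectorizer.transform(p["description"] for p in incoming)
for product, predicted in zip(incoming, clf.predict(X_new)):
    if predicted != product["category"]:
        # In a fuller pipeline, flagged records would go to a generative
        # model to draft a corrected, standardized description for review.
        print(f"Review {product['sku']}: stored={product['category']}, predicted={predicted}")
```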
When the System Fights Back: A Journey into Chaos Engineering
Enter chaos engineering — the art of deliberately creating disaster to build
stronger systems. I’d read about Netflix’s Chaos Monkey, a tool designed to
randomly kill servers in production, and I couldn’t help but admire the
audacity. What if we could turn our system into a fighter — one that could
take a punch and still come out swinging? ... Chaos engineering taught me more
than I expected. It’s not just a technical exercise; it’s a mindset. It’s
about questioning assumptions, confronting fears, and embracing failure as a
teacher. We integrated chaos experiments into our CI/CD pipeline, turning them
into regular tests. Post-mortems became celebrations of what we’d learned,
rather than finger-pointing sessions. And our systems? Stronger than ever. But
chaos engineering isn’t just about the tech. It’s about the culture you build
around it. It’s about teaching your team to think like detectives, to dig into
logs and metrics with curiosity instead of dread. It’s about laughing at the
absurdity of breaking things on purpose and marveling at how much you learn
when you do. So here’s my challenge to you: embrace the chaos. Whether you’re
running a small app or a massive platform, the principles hold true.
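In the spirit of that challenge, here is a toy chaos experiment in Python: it injects random failures into a fake dependency and checks that a retrying caller still holds the steady state. It is a sketch of the idea only, not Chaos Monkey or any particular tool.

```python
# Toy chaos experiment: inject faults into a dependency and verify the
# system's steady-state hypothesis ("callers still get an answer") holds.
import random

random.seed(7)  # deterministic run for the example

def flaky_dependency() -> str:
    """Pretend downstream service that fails roughly 30% of the time."""
    if random.random() < 0.3:
        raise ConnectionError("injected failure")
    return "ok"

def call_with_retry(fn, attempts: int = 8) -> str:
    """Naive retry wrapper standing in for real resilience machinery."""
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError:
            continue
    raise RuntimeError("dependency unavailable after retries")

if __name__ == "__main__":
    results = [call_with_retry(flaky_dependency) for _ in range(100)]
    assert all(r == "ok" for r in results), "steady state violated"
    print("steady state held under injected faults")
```

Wiring an experiment like this into a CI/CD job, as described above, is what turns it from a one-off stunt into a regular regression check.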
Enhancing Your Company’s DevEx With CI/CD Strategies
CI/CD pipelines are key to an engineering organization’s efficiency, used by up to 75% of software companies, with developers interacting with them daily.
However, these CI/CD pipelines are often far from being the ideal tool to work
with. A recent survey found that only 14% of practitioners go from code to production in less than a day, even though high-performing teams should be able to deploy multiple times a day. ... Merging, building, deploying and running are
all classic steps of a CI/CD pipeline, often handled by multiple tools. Some
organizations have SREs that handle these functions, but not all developers
are that lucky! In that case, if a developer wants to push code where a pipeline isn’t set up — which is increasingly common with the rise of microservices — they must assemble those rarely-used tools themselves. However, this will
disturb the flow state you wish your developers to remain in. ...
Troubleshooting issues within a CI/CD pipeline can be challenging for developers due to a lack of visibility and information. These processes often operate as black boxes, running on servers that developers may not have direct access to, using software that is unfamiliar to them.
Consequently, developers frequently rely on DevOps engineers — often
understaffed — to diagnose problems, leading to slow feedback loops.
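One way to attack that black-box feeling, sketched below in Python, is a thin wrapper that runs each pipeline stage as a subprocess and surfaces its command, duration and output in one place; the stage commands are placeholders, not any specific CI product's API.

```python
# Minimal "transparent pipeline" sketch: run each stage, capture its output,
# and print a summary developers can read without shell access to the runner.
import subprocess
import time

STAGES = [  # placeholder commands; substitute real build/test/deploy steps
    ("build", ["python", "-c", "print('compiling...')"]),
    ("test", ["python", "-c", "print('running tests...')"]),
    ("deploy", ["python", "-c", "print('deploying...')"]),
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        start = time.perf_counter()
        result = subprocess.run(cmd, capture_output=True, text=True)
        elapsed = time.perf_counter() - start
        status = "ok" if result.returncode == 0 else f"failed ({result.returncode})"
        print(f"[{name}] {status} in {elapsed:.2f}s")
        print(result.stdout, end="")
        if result.returncode != 0:
            print(result.stderr, end="")
            raise SystemExit(f"stage '{name}' failed; see output above")

if __name__ == "__main__":
    run_pipeline()
```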
How to Architect Software for a Greener Future
Code efficiency is something that the platforms and the languages should make
easy for us. They should do the work, because that's their area of expertise,
and we should just write code. Yes, of course, write efficient code, but it's
not a silver bullet. What about data center efficiency, then? Surely, if we
just made our data center hyper efficient, we wouldn't have to worry. We could
just leave this problem to someone else. ... It requires you to do some
thinking. It also requires you to orchestrate this in some type of way. One
way to do this is autoscaling. Let's talk about autoscaling. We have the same
chart here but we have added demand. Autoscaling is the simple concept that
when you have more demand, you use more resources and you have a bigger box,
virtual machine, for example. The key here is that the first part is very easy to do. We like to do this: "I think demand is going to go up, provision more, have more space. Yes, I feel safe. I feel secure now". Going the other way is a little scarier, but it's actually just as important when it comes to sustainability. Otherwise, we end up in the first scenario, where we are incorrectly sized for our resource use. Of course, this is a good tool to use if you have variability in demand.
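A rough sketch of that two-way decision, in Python: scale up when utilization is high, and just as deliberately scale down when it is low, so capacity tracks demand instead of sitting at its peak. The thresholds and replica model are illustrative assumptions.

```python
# Illustrative autoscaler loop: the scale-down branch matters as much for
# sustainability as the scale-up branch does for headroom.
def desired_replicas(current: int, utilization: float,
                     scale_up_at: float = 0.75, scale_down_at: float = 0.30,
                     min_replicas: int = 1, max_replicas: int = 20) -> int:
    if utilization > scale_up_at:
        return min(current + 1, max_replicas)  # add capacity for headroom
    if utilization < scale_down_at:
        return max(current - 1, min_replicas)  # release idle capacity
    return current                             # demand is in the comfort band

replicas = 4
for utilization in [0.82, 0.90, 0.55, 0.25, 0.20, 0.18]:  # fraction of capacity in use
    replicas = desired_replicas(replicas, utilization)
    print(f"utilization={utilization:.2f} -> replicas={replicas}")
```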
Tech Trends 2025 shines a light on the automation paradox – R&D World
The surge in AI workloads has prompted enterprises to invest in powerful GPUs
and next-generation chips, reinventing data centers as strategic resources.
... As organizations race to tap progressively more sophisticated AI systems,
hardware decisions once again become integral to resilience, efficiency and
growth, while leading to more capable “edge” deployments closer to humans and
not just machines. As Tech Trends 2025 noted, “personal computers embedded
with AI chips are poised to supercharge knowledge workers by providing access
to offline AI models while future-proofing technology infrastructure, reducing
cloud computing costs, and enhancing data privacy.” ... Data is the bedrock of
effective AI, which is why “bad inputs lead to worse outputs—in other words,
garbage in, garbage squared,” as Deloitte’s 2024 State of Generative AI in the
Enterprise Q3 report observes. Fully 75% of surveyed organizations have
stepped up data-life-cycle investments because of AI. Layer a well-designed
data framework beneath AI, and you might see near-magic; rely on half-baked or
biased data, and you risk chaos. As a case in point, Vancouver-based LIFT
Impact Partners fine-tuned its AI assistants on focused, domain-specific data
to help Canadian immigrants process paperwork—a far cry from scraping the open
internet and hoping for the best.
What Happens to Relicensed Open Source Projects and Their Forks?
Several companies have relicensed their open source projects in the past few
years, so the CHAOSS project decided to look at how an open source project’s
organizational dynamics evolve after relicensing, both within the original
project and its fork. Our research compares and contrasts data from three case
studies of projects that were forked after relicensing: Elasticsearch with
fork OpenSearch, Redis with fork Valkey, and Terraform with fork OpenTofu.
These relicensed projects and their forks represent three scenarios that shed
light on this topic in slightly different ways. ... OpenSearch was forked from
Elasticsearch on April 12, 2021, under the Apache 2.0 license, by the Amazon
Web Services (AWS) team so that it could continue to offer this service to its
customers. OpenSearch was owned by Amazon until September 16, 2024, when it
transferred the project to the Linux Foundation. ... OpenTofu was forked from
Terraform on Aug. 25, 2023, by a group of users as a Linux Foundation project
under the MPL 2.0. These users were starting from scratch with the codebase
since no contributors to the OpenTofu repository had previously contributed to
Terraform.
Setting up a Security Operations Center (SOC) for Small Businesses
In today's digital age, security is not optional for any business, irrespective of its size. Small businesses face increasing cyber threats too, making it essential to have robust security measures in place. A SOC
is a dedicated team responsible for monitoring, detecting, and responding to
cybersecurity incidents in real-time. It acts as the frontline defense against
cyber threats, helping to safeguard your business's data, reputation, and
operations. By establishing a SOC, you can proactively address security risks
and enhance your overall cybersecurity posture. The cost of setting up a SOC for a small business may be prohibitive, in which case the business can engage managed service providers for all or part of the services. ... Establishing clear, well-defined processes is vital for the smooth functioning of your SOC. The NIST Cybersecurity Framework can be a good fit for businesses of any size; define the processes that are essential and relevant given the size, threat landscape and risk tolerance of the business. ... Continuous training and development are essential for keeping
your SOC team prepared to handle evolving threats. Offer regular training
sessions, certifications, and workshops to enhance their skills and
knowledge.
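To make the monitoring-and-detection part tangible, here is a minimal Python sketch of a single detection rule a small SOC (or its managed provider) might run: counting failed logins per source IP and raising an alert past a threshold. The log format and threshold are illustrative assumptions.

```python
# Tiny detection-rule sketch: alert on repeated failed logins per source IP.
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5  # illustrative; tune to your risk tolerance

auth_log = [  # hypothetical, pre-parsed log events for the example
    {"event": "login_failed", "src_ip": "203.0.113.7", "user": "admin"},
    {"event": "login_failed", "src_ip": "203.0.113.7", "user": "admin"},
    {"event": "login_ok", "src_ip": "198.51.100.2", "user": "alice"},
    {"event": "login_failed", "src_ip": "203.0.113.7", "user": "root"},
    {"event": "login_failed", "src_ip": "203.0.113.7", "user": "admin"},
    {"event": "login_failed", "src_ip": "203.0.113.7", "user": "admin"},
]

failures = Counter(e["src_ip"] for e in auth_log if e["event"] == "login_failed")
for src_ip, count in failures.items():
    if count >= FAILED_LOGIN_THRESHOLD:
        # In practice this would open a ticket or page the on-call analyst.
        print(f"ALERT: {count} failed logins from {src_ip}")
```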
Quote for the day:
"Hardships often prepare ordinary
people for an extraordinary destiny." -- C.S. Lewis