Quote for the day:
“You live longer once you realize that any time spent being unhappy is wasted.” -- Ruth E. Renkel
How to Prepare for Life After NB-IoT
Last November, the IoT world was caught off guard by AT&T’s announcement
that it would discontinue support for Narrowband IoT (NB-IoT) by Q1 2025. For
many, this came as a big surprise. NB-IoT was considered a prodigy among
cellular IoT technologies, promising low-power, long-range, and low-cost
connectivity. While NB-IoT never reached mass adoption in the US, the decision
still came as a blow to those who had invested in the technology, and raised
concerns about its viability among operators outside the US. ... Fortunately,
most IoT modules supporting
NB-IoT also support LTE-M. Modules typically select the optimal network and
technology based on signal quality and internal radio settings. Devices with
roaming enabled will automatically switch networks or technologies if the
primary connection fails. Once AT&T shuts down its network, your devices
will automatically switch to another technology or network if set up
correctly. However, rather than waiting for the network to become unavailable,
you may want to stay in control of transitioning to another technology. This
also allows you to test the process with a subset of devices before rolling
out updates to your entire fleet. Assuming your cellular modules support
LTE-M, and you have remote access to update configurations, you can update the
radio access technology (RAT) using a simple AT Command.
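As a sketch of what that AT command looks like: the exact command names, parameter values, and whether a reboot is required all vary by module vendor and firmware, so treat the lines below as illustrative only and verify them against your module’s AT command manual before pushing changes to a fleet. The annotations in parentheses are explanatory, not part of the commands.

```
AT+QCFG="iotopmode",0,1     (Quectel BG96/BG95: 0 = LTE-M/eMTC, 1 = NB-IoT, 2 = both)

AT+CFUN=0                   (u-blox SARA-R4: deregister before changing the RAT)
AT+URAT=7                   (7 = LTE-M only; 8 = NB-IoT only)
AT+CFUN=15                  (silent reset so the new RAT takes effect)
```

Testing this on a small pilot group first, as suggested above, also confirms that the fallback network actually offers LTE-M coverage at your devices’ locations.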
Creating efficient, relevant, and lasting regulations requires several key
factors. First and foremost, policymakers need a working definition of the
object of their laws, which requires thorough work to capture the essence of
what will be affected by their text. This is a challenging task in the case of
AI because its definition remains in flux as the technology evolves. ...
The unprecedented surge in generative AI’s popularity created uncertainty for
policymakers about how to navigate the new landscape. There was an urgent need
for frameworks, definitions, and language to fully understand the impact of
this technology and how to frame it. As the technology outpaced expectations,
earlier regulatory efforts to address these tools quickly became inadequate
and obsolete, leaving policymakers scrambling to catch up. This is precisely
the situation Chinese regulators faced in their initial efforts to address the
generative AI sector. The basic provisions outlined in the law were
insufficient to address the profound societal impacts of generative AI’s
widespread adoption. The attempt to establish China as an early player in AI
regulation was overtaken by the pace of technological progress and
private-sector innovation, rendering even the terminology obsolete.
How to Simplify Automated Security Testing in CI/CD Pipelines
Dependency management is where many teams stumble, and we’ve all seen the
fallout from poorly managed libraries (hello, Log4Shell). Automating
dependency checks is not optional; it’s a must. Tools like Dependabot, OWASP
Dependency-Check, and Renovate take the grunt work out of monitoring for
vulnerabilities, raising alerts, and even creating pull requests to fix
issues. Imagine a Node.js team drowning in a sea of outdated packages. With
Dependabot hooked into their GitHub workflow, every vulnerability gets an
automatic pull request to update to a safe version. No manual labor, no
guessing games—just a steady rhythm of secure, up-to-date code. Go deeper by
leveraging Software Composition Analysis (SCA) tools that don’t just look at
direct dependencies but dive into the murky waters of transitive dependencies
too. ... Instead of vague warnings like “Potential SQL Injection Found,”
imagine getting, “SQL Injection vulnerability detected at line 45 in
user_controller.js. Here’s how to fix it…” Tools like CodeQL and Semgrep do
precisely this. They integrate directly into CI pipelines, flag issues,
suggest fixes, and provide links to further reading, all without overwhelming
the dev team.
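As a concrete illustration of the Dependabot setup described above, a minimal `.github/dependabot.yml` for a Node.js repository might look like the fragment below. The interval and PR limit are illustrative defaults, not recommendations:

```yaml
# .github/dependabot.yml -- enables automated dependency-update pull requests
version: 2
updates:
  - package-ecosystem: "npm"      # watch the Node.js package manifest
    directory: "/"                # location of package.json
    schedule:
      interval: "daily"           # check for new and patched versions every day
    open-pull-requests-limit: 10  # cap the number of concurrent update PRs
```

Note that this file drives version updates; Dependabot’s security alerts and security-fix pull requests are toggled separately in the repository’s security settings.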
Security automation and integration can smooth AppSec friction
Whether you perceive friction between development and security testing to be
an impediment or not often depends on your role in the organization. Of the
AppSec team members who responded to the survey used for the “Global State of
DevSecOps” report, 65% felt that testing impeded pipelines “moderately”
or “severely.” While the report didn’t survey why they feel this way, we can
speculate that it’s due to their proximity to the testing process, or
potentially because they’re feeling pressure to accelerate review processes.
Since they are closest to the task, they face the highest scrutiny for its
efficiency. Of the development and engineering team members who replied to the
survey, 58% share the sentiment of their AppSec counterparts. It is, however,
important to consider that an additional 12% of the surveyed developers and
engineers report that they just don’t have enough visibility into security
testing to know what’s going on. Were they to have greater visibility into
security testing processes, it is quite possible that they, too, would
perceive AppSec testing as an impediment to pipelines. And this lack of
visibility makes concerted DevSecOps initiatives more difficult to implement
since contributors are unable to close feedback loops or optimize development
and testing efforts.
The Power of Many: Crowdsourcing as A Game-Changer for Modern Cyber Defense
Although shared expertise significantly boosts threat detection and hunting
efficiency while simultaneously empowering cybersecurity education, there are
several stumbling blocks to address on the way to building global
crowdsourcing initiatives. While working towards a safer future, contributors
to crowdsourced efforts often face issues related to intellectual property
rights and the recognition of the significance of individual contributions
within the professional network. Ensuring proper recognition for discoveries
and contributions to global cyber defense is essential to maintaining
motivation and fairness, whether through author attribution in the code of a
detection rule or through shareable digital credentials issued by organizations
to recognize exceptional individual involvement in crowdsourcing initiatives.
Another challenge is adhering to privacy imperatives and complying with
security regulations, including the Traffic Light Protocol (TLP), while
sharing information with a wide audience, since disclosure of sensitive
information about vulnerabilities or cyberattacks can pose significant risks
to both the contributors and the beneficiaries of a crowdsourcing program.
How to Harness the Power of Fear and Transform It Into a Leadership Strength
One of the most powerful ways to address fear is to reframe it as a perception
rather than an absolute truth. Fear does not objectively measure threats; it
is just one mental faculty among many. By reframing it as a perception, a
leader can make sound decisions rather than react to each instance of fear.
Reframing does not stop fear; it changes how fear is processed. Leaders can
move from impulsive to composed behavior by understanding that fear is a
mental construct rather than an objective reality. Calming neurotransmitters
like serotonin and endorphins take the place of stress chemicals like cortisol
and adrenaline, promoting emotional equilibrium and resilience. For leaders,
that shift can be radical. By approaching challenges and setbacks with
strength and rationality, not fear, they can send a ripple effect through
their companies. It's a way of creating
an environment in which teams feel empowered, excited and pushed to grow and
thrive. ... Recognizing fear as a perceived threat allows leaders to respond
with reason and confidence. Mastering fear is a critical leadership skill,
fostering innovation and collaboration. By transforming fear into a tool for
growth, leaders unlock their full potential and inspire others, paving the way
for sustained progress.
Nuclear-Powered Data Centers: When Will SMRs Finally Take Off?
Taking stock of the nuclear-powered data center market in 2024, Alan Howard,
principal analyst of cloud and colocation services at Omdia, said: “It’s
nothing short of exciting that Amazon, Google, and Microsoft have all signed
deals for nuclear power… and Meta is publicly on the hunt.” Still, these deals
are relatively small by the standards of the data center industry, and Howard
cautioned against impatience, citing the mid-2030s as the earliest we can
expect to see broad commercial availability of nuclear energy in powering data
centers. “The reality is that these [nuclear reactors under construction] are
essentially test reactors which is part of the long regulatory road nuclear
technology companies must follow.” ... One of the chief challenges facing data
center companies is the five-to-seven-year permitting and construction
timelines for nuclear facilities, according to Ryan Mallory, COO at data
center firm Flexential. “Data center companies must begin securing permits,
ground space, and operational expertise to prepare for SMRs to become scalable
and repeatable by the 2030s,” Mallory said. There are also technological
challenges, according to Steven Carlini, chief data center and AI advocate at
Schneider Electric. “Integrating SMRs into the existing ecosystem will be
complex,” he said.
13 Cybersecurity Predictions for 2025
AI capabilities are awesome, yet I’m finding that most of the AI capabilities
being developed are focused on just getting them to work and into the
marketplace as soon as possible. We need to do a much better job of
incorporating cybersecurity best practices and secure-by-design principles
into the creation, operation, and sustainment of AI systems. The AI Security
and Incident Response Team (AISIRT) here at the Software Engineering
Institute has discovered numerous material weaknesses and flaws in AI
capabilities resulting in vulnerabilities that can be leveraged by hostile
entities. AI vulnerabilities are cyber vulnerabilities, and the list of
reported vulnerabilities continues to grow. Software engineers are trained to
incorporate secure-by-design principles into their work. But neural-network
models, including generative AI and LLMs, bring along a wide range of
additional kinds of weaknesses and vulnerabilities, and for many of these it
is a struggle to develop effective remediations. Until the AI community is
able to develop AI-appropriate secure-by-design best practices to augment the
secure-by-design practices already familiar to software engineers, I believe
we’ll see preventable cyber incidents affecting AI capabilities in 2025. ...
Ransomware criminal activity continues to feast on the cyber poor: cyber
criminals have been preying on those who operate below the cyber poverty line.
Biometrics Institute identifies dire need for clear language in biometrics and AI
Most biometrics experts agree that no one is exactly sure what anyone is
talking about. The Biometrics Institute is trying to help, via its Explanatory
Dictionary, a resource that aims to capture the nuances in biometric
terminology, “considering both formal definitions and how they are perceived
by the public – for example, how someone might explain biometrics or AI to a
friend.” Because, as of now, there isn’t a standard that is universally
agreed-on, nor is there really a clear way to explain biometrics and AI to
your neighbour Ted who works in marketing. “There are no universal definitions
of biometrics or AI and those put forward by ISO and some governments are
either too technical, obtuse or are not fully aligned with one another or are
hidden behind paywalls and not accessible to the majority of the general
public.” The paper drills down on the semantics of biometric grammar. What
does it mean for a biometric application to “have AI”? Conflation of certain
terms in both regulatory and public contexts exacerbates the problem. Media
struggles to pick apart the web of language, and contributes its own strands
in the process. Is a tool “AI-driven,” or “AI-equipped”? Where do algorithms
fit in?
How AI Copilots Are Transforming Threat Detection and Response
The rise of AI copilots in cybersecurity is a transformative moment, but it
requires a shift in mindset. Security teams should view these tools as
partners, not replacements. AI copilots excel at processing vast datasets and
identifying patterns, but humans are irreplaceable when it comes to judgment
and understanding context. The future of cybersecurity lies in this hybrid
approach, where AI enhances human capabilities rather than attempting to
replicate them. Business leaders should focus on fostering this collaboration,
equipping their teams with the skills and tools needed to work effectively
with AI. Additionally, transparency is non-negotiable. Teams must understand
how their AI copilots make decisions, ensuring accountability and reducing the
risk of errors. This also involves rigorous testing and ongoing monitoring to
detect and mitigate biases or vulnerabilities before they can be exploited.
... By empowering security teams with advanced capabilities, businesses can
stay ahead of adversaries and secure a resilient future. Looking ahead, AI
copilots are just the beginning. As these tools become more advanced, they
will evolve beyond copilots into more autonomous AI agents—a shift often
referred to as agentic AI.