The Top 5 Kubernetes Security Mistakes You’re Probably Making
Kubernetes configurations are primarily defined in YAML, a human-readable
data serialization standard. YAML’s simplicity is deceptive, however: small
errors can lead to significant security vulnerabilities. One common mistake is
improper indentation or formatting, which can cause a configuration to be
applied incorrectly or not at all, as the sketch below illustrates.
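A minimal sketch of this failure mode, using PyYAML to parse two invented
pod-spec fragments (the field names follow the Kubernetes securityContext
convention, but the manifests and values are hypothetical):

    import yaml  # PyYAML: pip install pyyaml

    correct = """
    spec:
      containers:
      - name: app
        securityContext:
          privileged: false
    """

    # The same fragment with `privileged` accidentally dedented two spaces:
    # it is no longer nested under securityContext, so the intended
    # restriction silently disappears instead of raising an error.
    mis_indented = """
    spec:
      containers:
      - name: app
        securityContext:
        privileged: false
    """

    good = yaml.safe_load(correct)["spec"]["containers"][0]
    bad = yaml.safe_load(mis_indented)["spec"]["containers"][0]
    print(good["securityContext"])  # {'privileged': False}
    print(bad["securityContext"])   # None -- the setting was lost

Both versions parse without complaint; only the first expresses the intended
policy.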
... The ransomware attack on the Toronto Public Library underscored the
critical importance of network microsegmentation in Kubernetes environments.
By limiting network access to only the resources a workload actually needs,
microsegmentation is pivotal in preventing the spread of attacks and
safeguarding sensitive data.
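In Kubernetes, microsegmentation is usually expressed as NetworkPolicy
objects. Below is a sketch using the official Kubernetes Python client that
admits ingress to database pods only from backend pods; the namespace and
labels are hypothetical placeholders:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a reachable cluster and kubeconfig

    # Explicit allow with implicit deny: once pods labelled app=db are
    # selected by this policy, only pods labelled app=backend may reach
    # them; all other ingress to app=db is dropped.
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="db-allow-backend-only"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "db"}),
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(
                        match_labels={"app": "backend"})
                )]
            )],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="prod", body=policy)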
... eBPF is the basis for creating a universal “security blanket” across
Kubernetes clusters, and is applicable on premises, in the public cloud and at
the edge. Its integration at the kernel level allows for immediate detection
of monitoring gaps and seamless application of security measures to new and
changing clusters. eBPF can automatically apply predefined security policies
and monitoring protocols to any new cluster within the environment.
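As a flavor of what kernel-level visibility looks like in practice, here is a
minimal sketch using the BCC Python bindings to trace every process launch on
a host, regardless of which container or pod initiated it (requires Linux,
bcc, and root; the probe name and message are illustrative):

    from bcc import BPF

    # Tiny eBPF program: fire on entry to the execve syscall and emit a
    # line to the kernel trace pipe for every process started on the host.
    prog = """
    int trace_exec(void *ctx) {
        bpf_trace_printk("execve observed\\n");
        return 0;
    }
    """

    b = BPF(text=prog)
    b.attach_kprobe(event=b.get_syscall_fnname("execve"),
                    fn_name="trace_exec")
    print("Tracing process launches... Ctrl-C to stop")
    b.trace_print()  # stream the trace pipe to stdout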
Error-correction breakthroughs bring quantum computing a step closer
The best strategy, says Sam Lucero, chief quantum analyst at Omdia, would be
to combine multiple approaches to get the error rates down even further. ...
The bigger question is which type of qubit is going to become the standard –
if any. “Different types of qubits might be better for different types of
computations,” he says. This is where early testing can come in.
High-performance computing centers can already buy quantum computers, and
anyone with a cloud account can access one online. Using quantum computers via
a cloud connection is much cheaper and quicker than buying one. Plus, it gives
enterprises
more flexibility, says Lucero. “You can sign on and say, ‘I want to use IonQ’s
trapped ions. And, for my next project, I want to use Rigetti, and for this
other project, I want to use another computer.’” But stand-alone quantum
computers aren’t necessarily the best path forward for the long term, he adds.
“If you’ve got a high-performance computing capability, it will have GPUs for
one type of computing, quantum processing units for another type of computing,
CPUs for another type of computing – and it’s going to be transparent to the
end user,” he says. “The system will automatically parcel it out to the
appropriate type of processor.”
Is hybrid encryption the answer to post-quantum security?
One of the biggest debates is how much security hybridization actually
offers. Much depends on the details, and algorithm designers can take any
number of approaches with different benefits; there are several models for
hybridization, and not all the details have been finalized. Encrypting the
data first with one algorithm and then with a second combines the strength of
both, essentially putting a digital safe inside a digital safe: an attacker
would need to break both algorithms.
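A minimal sketch of the safe-inside-a-safe idea, using Python’s cryptography
package; both layers here are Fernet purely for brevity, whereas a real
post-quantum hybrid would pair two different algorithms (say, a classical
cipher wrapping a post-quantum one):

    from cryptography.fernet import Fernet  # pip install cryptography

    inner = Fernet(Fernet.generate_key())
    outer = Fernet(Fernet.generate_key())

    # Encrypt with the inner algorithm, then wrap the result in the outer.
    ciphertext = outer.encrypt(inner.encrypt(b"patient records"))

    # Decryption peels the layers in reverse; defeating one layer alone
    # still leaves the attacker facing the other.
    assert inner.decrypt(outer.decrypt(ciphertext)) == b"patient records"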
However, the combinations don’t always deliver in the same way. For example,
hash functions are designed to make it hard to find collisions, that is, two
different inputs x_1 and x_2 such that h(x_1) = h(x_2). If the output of a
first hash function is fed into a second, different hash function (say
g(h(x))), finding a collision may not get any harder, at least if the weakness
lies in the first function: two inputs that collide in the first hash feed the
same output into the second, producing a collision for the hybrid system,
since g(h(x_1)) = g(h(x_2)) whenever h(x_1) = h(x_2), as the short
demonstration after this paragraph shows. Digital signatures are also combined
differently than encryption; one of the simplest approaches is to calculate
multiple signatures independently of each other.
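In the sketch below, h is a deliberately weakened stand-in (SHA-256 truncated
to one byte, so collisions are trivial to brute-force) and g is full SHA-256;
any collision in h carries straight through the composition g(h(x)):

    import hashlib

    def h(data: bytes) -> bytes:          # weak inner hash (illustrative)
        return hashlib.sha256(data).digest()[:1]

    def g(data: bytes) -> bytes:          # strong outer hash
        return hashlib.sha256(data).digest()

    # Brute-force a collision in the weak inner function.
    seen = {}
    x1 = x2 = None
    for i in range(10_000):
        x = str(i).encode()
        d = h(x)
        if d in seen:
            x1, x2 = seen[d], x
            break
        seen[d] = x

    assert x1 != x2 and h(x1) == h(x2)
    # Identical inner output means identical input to g, so the hybrid
    # collides too: g(h(x1)) == g(h(x2)).
    assert g(h(x1)) == g(h(x2))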
By elevating partners’ service capabilities, we ensure they offer a comprehensive cybersecurity solution to enterprises in today’s dynamic threat landscape
MSSPs have a significant opportunity for growth, with an increasing number
of partners showing interest in this domain. What’s notable is that our focus
isn’t solely on partners delivering network security solutions but also
extends to other offerings. For instance, our SIEM solutions now feature a
consumption-based model, attracting more partners to explore the realm of MSSP
partnerships. This trend has already gained momentum over the past year,
indicating a promising trajectory for the future. As the market continues to
expand, catering to a diverse range of customers across various sizes and
sectors, the demand for managed security services will only intensify. Here,
our integrator partners play a crucial role, positioned to capitalise on the
growing requirements of clients. Moreover, selected MSSP partners have the
opportunity to develop specialised services around Fortinet offerings like
FortiDirect, FortiEDR, FortiWeb, and FortiMail. Our programs, such as the MSSP
Monitor program and the Flex VM program, provide flexible consumption models
tailored to the evolving needs of MSP partners.
Early adopters fast-tracking gen AI into production, according to new report
One in four organizations say gen AI is critically important to gaining
increased productivity and efficiency. Thirty percent say improving customer
experience and personalization is their highest priority, and 26% say it’s the
technology’s potential to improve decision-making that matters most. ... “The
generative AI phenomenon has captured the attention of the market—and the
world—with both positive and negative connotations,” said Howard Dresner,
founder and chief research officer at Dresner Advisory. “While generative AI
adoption remains nascent in the near term, a strong majority of respondents
indicate intentions to adopt it early or in the future.” ... Nearly half of
organizations consider data privacy to be a critical concern in their decision
to adopt gen AI. Legal and regulatory compliance, the potential for unintended
consequences, and ethics and bias concerns are also significant. Fewer than
half of respondents consider costs (46%) and organizational policy (43%)
important to generative AI adoption. Weaponized LLMs and
attacks on chatbots fuel fears over data privacy. More organizations are
fighting back and using gen AI to protect against chatbot leaks.
AI and data centers - Why AI is so resource hungry
Is it the data set, i.e. volume of data? The number of parameters used? The
transformer model? The encoding, decoding, and fine-tuning? The processing
time? The answer is of course a combination of all of the above. It is often
said that GenAI Large Language Models (LLMs) and Natural Language Processing
(NLP) require large amounts of training data. However, measured in terms of
traditional data storage, this is not actually the case. ... It is thought
that GPT-3 was trained on 45 terabytes of Common Crawl plaintext, filtered
down to 570 GB of text data. The Common Crawl corpus is hosted on AWS for free
as a contribution to open source AI data. But storage volumes, the billions of
web pages or data tokens that are scraped from the web, Wikipedia, and
elsewhere, then encoded, decoded, and fine-tuned to train ChatGPT and other
models,
should have no major impact on a data center. Similarly, the terabytes or
petabytes of data needed to train a text-to-speech, text-to-image, or
text-to-video model should put no extraordinary strain on the power and
cooling systems in a data center built for hosting IT equipment storing and
processing hundreds or thousands of petabytes of data.
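A back-of-the-envelope check makes the point; the 570 GB figure is the one
quoted above, while the 100 PB facility size is an assumed round number for a
"hundreds of petabytes"-class site:

    training_text_gb = 570                  # GPT-3's filtered training text
    facility_gb = 100 * 1_000_000           # 100 PB in GB (decimal units)
    share = training_text_gb / facility_gb
    print(f"training text is {share:.6%} of capacity")  # ~0.000570%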
Making cloud infrastructure programmable for developers
Just as service-oriented architecture (SOA) evolved application architecture
from monolithic applications into microservices patterns, IaC has been the
slow-burn movement that is challenging what the base building blocks should be
for how we think of cloud infrastructure. IaC really got on the map in the
2010s, when Puppet, Chef, and Ansible introduced IaC methods for the
configuration of virtual machines. Chef was well-loved for allowing developers
to use programming languages like Ruby and for the reuse and sharing that came
with being able to use the conventions of a familiar language. During the next
decade, the IaC movement entered a new era as the public cloud provider
platforms matured, and Kubernetes became the de facto cloud operating model.
HashiCorp’s Terraform became the IaC poster child, introducing new
abstractions for the configuration of cloud resources and bringing a
domain-specific language (DSL) called HashiCorp Configuration Language (HCL)
designed to spare developers from lower-level cloud infrastructure plumbing.
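The core abstraction Terraform popularized is easy to caricature in a few
lines: declare the desired state as data, diff it against the actual state,
and emit a plan. The toy below is not any real tool’s API, just an
illustration of that declare-diff-apply loop:

    # Desired infrastructure, declared as plain data.
    desired = {
        "vm-web": {"size": "small", "image": "ubuntu-22.04"},
        "vm-db":  {"size": "large", "image": "ubuntu-22.04"},
    }
    # What the cloud provider reports as actually running.
    actual = {
        "vm-web": {"size": "small", "image": "ubuntu-20.04"},
        "vm-old": {"size": "small", "image": "debian-11"},
    }

    def plan(desired, actual):
        """Compute create/update/delete actions to reconcile the two."""
        actions = []
        for name, spec in desired.items():
            if name not in actual:
                actions.append(f"create {name} {spec}")
            elif actual[name] != spec:
                actions.append(f"update {name}: {actual[name]} -> {spec}")
        for name in actual:
            if name not in desired:
                actions.append(f"delete {name}")
        return actions

    for action in plan(desired, actual):
        print(action)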
A cloud-ready infra: Fundamental shift in how new-age businesses deliver value to customers
Cloud computing has emerged as a robust and secure platform for data storage,
offering unparalleled protection against extreme conditions and disasters.
Today’s cloud-based providers offer robust security and disaster recovery
capabilities, ensuring the safety and integrity of critical data assets.
... This includes empowering doctors and nurses to access patient records
securely on their own devices and facilitating remote consultations through
virtual desktop infrastructure (VDI). This instant access has transformed the
way healthcare professionals interact with patient data, allowing doctors to
review charts on tablets during rounds and nurses to retrieve medication
histories from any workstation. By storing data on secure servers rather than
end-client devices, cloud-based solutions help protect critical medical
records in the event of theft or compromise of an end device. ... This
approach not only ensures data security but also meets the stringent
requirements of healthcare institutions while allowing for scalable systems
connected to the hospital’s network.
The Paradox of Productivity: How AI and Analytics are Shaping the Future of Work-Life Balance
One of the key challenges we face is managing time effectively in an
environment where the line between ‘on’ and ‘off’ hours is increasingly fuzzy.
AI-powered tools and analytics can generate insights and tasks
round-the-clock, leading to an ‘always-on’ work culture. This can encroach
upon personal time, making it challenging to disconnect and potentially
causing stress and burnout. Maintaining mental health in this context is
paramount. It is incumbent upon companies to ensure that the implementation of
AI and analytics tools does not exacerbate workplace stress. Instead, these
tools should be leveraged to promote a healthier work-life balance by
automating routine tasks, predicting workload peaks, and enabling flexible
working arrangements. Achieving personal fulfillment in the age of AI also
means embracing lifelong learning. As the nature of work evolves, so too must
our skillsets. Upskilling and reskilling become not just a means to
professional advancement but also an opportunity for personal growth and
satisfaction. Analytics can play a role here in identifying skill gaps and
learning opportunities that align with individual career paths and
interests.
The importance of a good API security strategy
Hackers love exploiting APIs for many reasons, but mostly because APIs let
them bypass security controls and easily access sensitive company and customer
data, as well as certain functionality. A recent incident involving a
publicly exposed API of social media platform Spoutible could have ended in
attackers stealing users’ 2FA secrets, encrypted password reset tokens, and
more. This type of incident can result in a loss of customer and business
partners’ trust, consequently leading to financial loss and a drop in brand
value. Poor API security practices can also have regulatory and legal
consequences, cause disruption to company operations and even result in
intellectual property theft. ... A good API security strategy is essential for
every organization that wants to keep its digital assets safe and protect
sensitive customer data. OWASP constantly updates its list of the top 10 API
security threats. While security practitioners shouldn’t rely solely on this
list, it is still an essential tool when planning a security strategy
that will hold up. Adhering to the NIST Cybersecurity Framework is also an
essential step in planning a good API security strategy.
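The top entry on that OWASP list, broken object-level authorization, is worth
a concrete sketch. The FastAPI example below (with invented tokens and an
in-memory store) shows the check that vulnerable APIs omit: verifying that the
authenticated caller owns the specific object requested, not merely that the
caller is logged in:

    from fastapi import FastAPI, HTTPException

    app = FastAPI()

    # Hypothetical data, purely for illustration.
    ORDERS = {1: {"owner": "alice", "item": "widget"},
              2: {"owner": "bob", "item": "gadget"}}
    TOKENS = {"token-alice": "alice", "token-bob": "bob"}

    @app.get("/orders/{order_id}")
    def read_order(order_id: int, token: str):
        user = TOKENS.get(token)           # authentication
        if user is None:
            raise HTTPException(status_code=401, detail="invalid token")
        order = ORDERS.get(order_id)
        if order is None:
            raise HTTPException(status_code=404, detail="no such order")
        if order["owner"] != user:         # object-level authorization
            raise HTTPException(status_code=403, detail="not your order")
        return order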
Quote for the day:
"The signs of outstanding leadership
are found among the followers." -- Max DePree