Putting Threat Modeling Into Practice: A Guide for Business Leaders
One of the primary benefits of threat modeling is its ability to reduce the
number of defects that make it to production. By identifying potential threats
and vulnerabilities during the design phase, companies can implement security
measures that prevent these issues from ever reaching the production
environment. This proactive approach not only improves the quality of products
but also reduces the costs associated with post-production fixes and patches.
... Along similar lines, threat modeling can help meet obligations defined in
contracts if those contracts include terms related to risk identification and
management. ... Beyond obligations linked to compliance and contracts, many
businesses also establish internal IT security goals. For example, they might
seek to configure access controls based on the principle of least privilege or
enforce zero-trust policies on their networks. Threat modeling can help to put
these policies into practice by allowing organizations to identify where their
risks actually lie. From this perspective, threat modeling is a practice that
the IT organization can embrace because it helps achieve larger goals – namely,
those related to internal governance and security strategy.
How Cloud Custodian conquered cloud resource management
Everybody knows the cloud bill is basically rate multiplied by usage. But
while most enterprises have a handle on rate, usage is the hard part. You have
different application teams provisioning infrastructure. You go through code
reviews. Then when you get to five to 10 applications, you get past the point
where anyone can possibly know all the components. Now you have containerized
workloads on top of more complex microservices architectures. And you want to
be able to allow a combination of cathedral (control) and bazaar (freedom of
technology choice) governance, especially today with AI and all of the new
frameworks and LLMs [large language models]. At a certain point you lose the
ability to follow all of this in your head. There are a lot of tools
to enable that understanding — architectural views, network service maps,
monitoring tools — all feeling out different parts of the elephant versus
giving organizations a holistic view. They need to know not only what’s in
their cloud environment, but what’s being used, what’s conforming to policy,
and what needs to be fixed, and how. That’s what Cloud Custodian is for — so
you can define the organizational requirements of your applications and map
those up against cloud resources as policy.
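To make that concrete, a Custodian policy is just declarative data. Below is a minimal sketch, assuming the PyYAML package and an invented Owner-tag requirement, that builds a policy as a Python dict and serializes it to the YAML the custodian CLI consumes.

# Illustrative sketch of a Cloud Custodian-style policy. The tag name
# ("Owner") and the stop action are assumptions made for this example.
import yaml  # pip install pyyaml

policy = {
    "policies": [
        {
            "name": "ec2-missing-owner-tag",
            "resource": "aws.ec2",
            "comment": "Flag instances that do not conform to the tagging policy.",
            "filters": [
                {"tag:Owner": "absent"},  # instances with no Owner tag
            ],
            "actions": [
                "stop",  # could also notify or mark-for-op
            ],
        }
    ]
}

with open("custodian-policy.yml", "w") as f:
    yaml.safe_dump(policy, f, sort_keys=False)

# The policy would then be executed with something like:
#   custodian run --output-dir ./out custodian-policy.yml

The point is less the specific filter than the shape: organizational requirements expressed as versionable policy that is evaluated against live cloud resources.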
5 Steps to Identify and Address Incident Response Gaps
To compress the time it takes to address an incident, it’s not enough to stick
to the eyes-on-glass model that network operations centers (NOCs)
traditionally privilege. That approach is too human-intensive and error-prone
to triage an increasingly overwhelming volume of data effectively. To go from
event to resolution with minimal toil and increased speed, teams can leverage
AI and automation to deflect noise, surface only the most critical alerts and
automate diagnostics and remediations. Generative AI can amplify that effect:
For teams collaborating in ChatOps tools, common diagnostic questions can be
used as prompts to get context and accelerate action. ... When an incident
hits, teams spend too much time gathering information and looping in numerous
people to tackle it. Generative AI can be used to quickly summarize key data
about the incident and provide actionable insights at every step of the
incident life cycle. It can also supercharge the ability to develop and deploy
automation jobs faster, even by non-technical teams: Operators can translate
conversational prompts into proposed runbook automation or leverage
pre-engineered prompts based on common categories.
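As a rough illustration of those pre-engineered prompts, the sketch below templates a common diagnostic question from an alert payload. The alert fields are hypothetical, and where the prompt is sent (a ChatOps thread, an LLM API) is left open; this is not any particular product's interface.

# Hypothetical sketch: turning a common diagnostic question into a
# reusable prompt template. All field names here are invented.
from string import Template

DIAGNOSTIC_PROMPT = Template(
    "You are assisting an on-call engineer.\n"
    "Service: $service\n"
    "Alert: $alert\n"
    "Recent deploys: $deploys\n"
    "Summarize the likely blast radius, list the top three probable "
    "causes, and propose the first diagnostic command to run."
)

def build_prompt(alert: dict) -> str:
    """Fill the template from an alert payload (fields are illustrative)."""
    return DIAGNOSTIC_PROMPT.substitute(
        service=alert.get("service", "unknown"),
        alert=alert.get("summary", "n/a"),
        deploys=", ".join(alert.get("recent_deploys", [])) or "none recorded",
    )

alert = {
    "service": "checkout-api",
    "summary": "p99 latency above 2s for 10 minutes",
    "recent_deploys": ["checkout-api v1.42"],
}
print(build_prompt(alert))  # paste into the ChatOps thread or an LLM call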
DevOps with OpenShift Pipelines and OpenShift GitOps
Unlike some other CI solutions, such as the legacy tool Jenkins, Pipelines is
built on native Kubernetes technologies and is thus resource-efficient, since
pipelines and tasks run only when needed. Once a pipeline
has completed, no resources are consumed by the pipeline itself. Pipelines and
tasks are constructed using a declarative approach following standard
Kubernetes practices. However, OpenShift Pipelines includes a user-friendly
interface built into the OpenShift console that enables users to easily
monitor the execution of the pipelines and view task logs as needed. The user
interface also shows metrics for individual task execution, enabling users to
better optimize pipeline performance. In addition, the user interface enables
users to quickly create and modify pipelines visually. While users are
encouraged to store task and pipeline resources in Git, the ability to
visually create and modify pipelines greatly reduces the learning curve and
makes the technology approachable for new users. You can leverage
pipelines-as-code to provide an experience that is tightly integrated with
your backend Git provider, such as GitHub or GitLab.
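To give a feel for the declarative approach, here is a minimal sketch of a Tekton-style Pipeline resource, expressed as a Python dict and dumped to YAML (assuming PyYAML); the task names and ordering are illustrative rather than a canonical OpenShift example.

# Minimal sketch of a declarative pipeline in the Tekton style that
# OpenShift Pipelines builds on. Task names are invented for the example.
import yaml  # pip install pyyaml

pipeline = {
    "apiVersion": "tekton.dev/v1",
    "kind": "Pipeline",
    "metadata": {"name": "build-and-test"},
    "spec": {
        "tasks": [
            {"name": "unit-tests", "taskRef": {"name": "run-tests"}},
            {
                "name": "build-image",
                "runAfter": ["unit-tests"],  # build only after tests pass
                "taskRef": {"name": "buildah"},
            },
        ]
    },
}

# The manifest can live in Git and be applied like any Kubernetes resource.
print(yaml.safe_dump(pipeline, sort_keys=False))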
Rethinking enterprise architects’ roles for agile transformation
Mounting technical debt and extending the life of legacy systems are key risks
CIOs should be paranoid about. The question is, how should CIOs assign
ownership of this problem, require that technical debt’s risks be
categorized, and ensure there’s a roadmap for implementing remediations? One
solution is to assign the responsibility to enterprise architects in a product
management capacity. Product managers must define a vision statement that
aligns with strategic and end-user needs, propose prioritized roadmaps, and
oversee an agile backlog for agile delivery teams. ... Enterprise architects
who have a software development background are ideal candidates to assume the
delivery leader role and can steer teams toward developing platforms with
baked-in security, performance, usability, and other best practices. ...
Enterprise architects assuming a sponsorship role in these initiatives can
help steer them toward force-multiplying transformations that reduce risks and
provide additional benefits in improved experiences and better
decision-making. CIOs who want enterprise architects to act as sponsors should
provide them with a budget and oversee the development of a charter for
managing investment priorities.
The best way to regulate AI might be not to specifically regulate AI. This is why
Most of the potential uses of AI are already covered by existing rules and
regulations designed to do things such as protect consumers, protect privacy
and outlaw discrimination. These laws are far from perfect, but where they are
not perfect the best approach is to fix or extend them rather than introduce
special extra rules for AI. AI can certainly raise challenges for the laws we
have – for example, by making it easier to mislead consumers or to apply
algorithms that help businesses to collude on prices. ... Finally, there’s a
lot to be said for becoming an international “regulation taker”. Other
jurisdictions such as the European Union are leading the way in designing
AI-specific regulations. Product developers worldwide, including those in
Australia, will need to meet those new rules if they want to access the EU and
those other big markets. If Australia developed its own idiosyncratic
AI-specific rules, developers might ignore our relatively small market and go
elsewhere. This means that, in those limited situations where AI-specific
regulation is needed, the starting point ought to be the overseas rules that
already exist. There’s an advantage in being a late or last mover.
How LLMs on the Edge Could Help Solve the AI Data Center Problem
Anyone interacting with an LLM in the cloud is potentially exposing the
organization to privacy questions and the potential for a cybersecurity
breach. As more queries and prompts move outside the enterprise,
there are going to be questions about who has access to that data. After all,
users are asking AI systems all sorts of questions about their health,
finances, and businesses. In doing so, these users often enter personally
identifiable information (PII), sensitive healthcare data, customer
information, or even corporate secrets. The move toward smaller LLMs that can
either be contained within the enterprise data center – and thus not run
in the cloud – or run on local devices is a way to bypass many of the
ongoing security and privacy concerns posed by broad usage of LLMs such as
ChatGPT. ... Pruning the models to reach a more manageable number of
parameters is one obvious way to make them more feasible on the edge. Further,
developers are shifting the GenAI model from the GPU to the CPU, reducing the
processing footprint, and building standards for compiling. As well as the
smartphone applications noted above, the use cases that lead the way will be
those that are achievable despite limited connectivity and bandwidth,
according to Goetz.
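To give a flavor of what pruning looks like in practice, the sketch below applies PyTorch's built-in magnitude pruning to a toy layer standing in for one LLM projection matrix; the layer size and 30% sparsity target are arbitrary choices for illustration, not a recipe.

# Minimal magnitude-pruning sketch with PyTorch (assumes torch installed).
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(1024, 1024)  # stand-in for a transformer projection

# Zero out the 30% of weights with the smallest absolute magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.3)
prune.remove(layer, "weight")  # make the pruning permanent

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.1%}")  # roughly 30% of weights are now zero

# Shifting inference from GPU to CPU is then a device-placement choice:
model_cpu = layer.to("cpu")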
'Good complexity' can make hospital networks more cybersecure
Because complicated systems have structures, Tanriverdi says, it's difficult
but feasible to predict and control what they'll do. That's not feasible for
complex systems, with their unstructured connections. Tanriverdi found that as
health care systems got more complex, they became more vulnerable. ... The
problem, he says, is that such systems offer more data transfer points for
hackers to attack, and more opportunities for human users to make security
errors. He found similar vulnerabilities with other forms of complexity,
including many different types of medical services handling health data, and
decentralizing strategic decisions to member hospitals instead of making them
at the corporate center. The researchers also proposed a solution: building
enterprise-wide data governance platforms, such as centralized data
warehouses, to manage data sharing among diverse systems. Such platforms would
convert dissimilar data types into common ones, structure data flows, and
standardize security configurations. "They would transform a complex system
into a complicated system," he says. By simplifying the system, they would
further lower its level of complication.
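As a toy illustration of that conversion step, the sketch below maps two hypothetical member-hospital record formats into one common schema; every field name is invented for the example, and a real governance platform would add validation, lineage, and security controls on top.

# Hypothetical sketch: normalizing dissimilar record shapes from two
# member hospitals into one common schema at a central platform.
from datetime import date

def from_hospital_a(rec: dict) -> dict:
    return {
        "patient_id": rec["pid"],
        "admitted_on": date.fromisoformat(rec["admit_date"]),
        "service": rec["dept"].lower(),
    }

def from_hospital_b(rec: dict) -> dict:
    m, d, y = rec["admission"].split("/")  # US-style m/d/y dates
    return {
        "patient_id": rec["patient"]["id"],
        "admitted_on": date(int(y), int(m), int(d)),
        "service": rec["unit"].lower(),
    }

warehouse = [
    from_hospital_a({"pid": "A-17", "admit_date": "2024-05-02", "dept": "Cardiology"}),
    from_hospital_b({"patient": {"id": "B-09"}, "admission": "5/3/2024", "unit": "ICU"}),
]
print(warehouse)  # both records now share one structured, auditable shape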
Threats by Remote Execution and Activating Sleeper Devices in the Context of IoT and Connected Devices
As the Internet of Things proliferates, the number of connected devices in
both civilian and military contexts is increasing exponentially. From smart
homes to military-grade equipment, the IoT ecosystem connects billions of
devices, all of which can potentially be exploited by adversaries. The pagers
in the Hezbollah case, though low-tech compared to modern IoT devices,
illustrate the vulnerability of any system in which devices are remotely
controllable. In the IoT realm, the stakes are even higher, as everyday
devices like smart thermostats, security cameras, and industrial equipment are
interconnected and potentially exploitable. In a modern context, this
vulnerability could be magnified when applied to smart cities, critical
infrastructure, and defense systems. If systems such as power grids, water
systems, or transportation networks are connected to the internet, they could
be subjected to remote control by malicious actors. ... One of the most
alarming aspects of this situation is the suspected infiltration of the supply
chain. The pagers used by Hezbollah were reportedly tampered with before being
delivered to the group, likely with explosives embedded within the devices.
Detecting vulnerable code in software dependencies is more complex than it seems
A “phantom dependency” refers to a package used in your code that isn’t
declared in the manifest. This concept is not unique to any one language (it’s
common in JavaScript, Node.js, and Python). This is problematic because you
can’t secure what you can’t see. Traditional SCA solutions focus on manifest
files to identify all application dependencies, but those can both be under-
or over-representative of the dependencies actually used by the application.
They can be under-representative if the analysis starts from a manifest file
that only contains a subset of dependencies, e.g., when additional
dependencies are installed in a manual, scripted or dynamic fashion. This can
happen in Python ML/AI applications, for example, where the choice of packages
and versions often depends on the operating system or hardware architecture,
which cannot be fully expressed by dependency constraints in manifest files.
And they are over-representative if they contain dependencies not actually
used. This happens, for example, if you dump the names of all the components
contained in a bloated runtime environment into a manifest file.
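A bare-bones version of the detection side is easy to sketch in Python: collect top-level imports via the ast module and diff them against names declared in requirements.txt. Real SCA tools go much further, e.g., mapping import names to distribution names ("yaml" vs. "PyYAML") and tracing dynamic installs, which this sketch deliberately skips.

# Crude phantom-dependency check (Python 3.10+ for stdlib_module_names).
# usage: python find_phantoms.py app.py requirements.txt
import ast
import sys
from pathlib import Path

def imported_names(source_file: Path) -> set:
    """Top-level module names imported by a single source file."""
    names = set()
    for node in ast.walk(ast.parse(source_file.read_text())):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    return names

def declared_names(requirements: Path) -> set:
    """Package names from requirements.txt, crudely stripped of versions."""
    out = set()
    for line in requirements.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            out.add(line.split(";")[0].split("==")[0].split(">=")[0].strip().lower())
    return out

src, reqs = Path(sys.argv[1]), Path(sys.argv[2])
phantoms = imported_names(src) - set(sys.stdlib_module_names)
phantoms = {n for n in phantoms if n.lower() not in declared_names(reqs)}
print("possibly undeclared imports:", sorted(phantoms))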
Quote for the day:
"An accountant makes you aware but a
leader makes you accountable." -- Henry Cloud