Generative AI isn’t about what you think it is
ChatGPT and other generative artificial intelligence (AI) programs like DALL-E
are often thought of as a way to get rid of workers, but that isn’t their real
strength. What they really do well is improve on the work people turn out.
There’s often a conflict between doing something fast and doing it well — a
conflict generative AI could end by helping people become better and faster
creators. And clearly, if these tools were presented more as assistants rather
than as a replacement for people, the blowback we’ve seen (most recently in
court) could be tamped down. ... We usually measure productivity as the amount
of work done in a given time — without taking into account the quality of that
work. Typically, the faster you do something, the lower the quality. Quality in
and of itself is an interesting subject. I remember reading the book “Zen and
the Art of Motorcycle Maintenance,” which uses storytelling to explain how
quality is fluid and depends on the perception of the person observing it. For
instance, what’s considered high quality in a sweatshop would be completely
unacceptable in a Bentley factory.
Enterprises remain vulnerable through compromised API secrets
While many security teams assign specific entitlements to API keys, tokens, and
certificates, the survey discovered that more than 42% do not. That means
they’re granting all-or-nothing access to any users bearing these credentials,
which, although it is the path of least resistance in access management, also
increases the security risk. Corsha’s researchers also found that 50% of
respondents have little-to-no visibility into the machines, devices, or services
(i.e., clients) that leverage the API tokens, keys, or certificates that their
organizations are provisioning. Limited visibility can lead to secrets that are
forgotten, neglected, or left behind, making them prime targets for bad actors
to exploit undetected by traditional security tools and best practices. Another
red flag: although 54% of respondents rotate their secrets at least once a
month, 25% admit that they can take as long as a year to rotate secrets. The
long-lived, static nature of these bearer secrets makes them prime targets for adversaries, much like static passwords for online accounts.
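To make these findings concrete, the sketch below shows the kind of audit a security team could run over its own credential inventory, flagging unscoped keys and secrets that have gone too long without rotation. It is a minimal illustration under assumed data: the inventory records, the "*" scope convention, and the 30-day threshold are placeholders, and in practice the records would come from a secrets manager or vault rather than a hard-coded list.

from datetime import datetime, timedelta, timezone

# Hypothetical credential inventory; real records would come from a secrets
# manager or vault export, not a hard-coded list.
SECRETS = [
    {"id": "svc-payments-key", "created": datetime(2023, 1, 10, tzinfo=timezone.utc), "scope": "payments:read"},
    {"id": "ci-deploy-token", "created": datetime(2022, 6, 1, tzinfo=timezone.utc), "scope": "*"},  # all-or-nothing
]

MAX_AGE = timedelta(days=30)  # rotate at least monthly

def audit(secrets, now=None):
    """Flag secrets that are unscoped or overdue for rotation."""
    now = now or datetime.now(timezone.utc)
    findings = []
    for s in secrets:
        if s["scope"] == "*":
            findings.append((s["id"], "unscoped credential grants all-or-nothing access"))
        age = now - s["created"]
        if age > MAX_AGE:
            findings.append((s["id"], f"not rotated in {age.days} days"))
    return findings

for secret_id, issue in audit(SECRETS):
    print(f"{secret_id}: {issue}")

Run regularly, a report like this surfaces the long-lived, forgotten secrets the survey describes before an adversary finds them.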
The essential checklist for effective data democratization
In many cases, only IT has access to data and data intelligence tools in
organizations that don’t practice data democratization. So in order to make
data accessible to all, new tools and technologies are required. Of course,
cost is a big consideration, says Orlandini, as well as deciding where to host
the data, and having it available in a fiscally responsible way. An
organization might also question if the data should be maintained on-premises
due to security concerns in the public cloud. But Kevin Young, senior data and
analytics consultant at consulting firm SPR, says organizations can first
share data by creating a data lake on object storage such as Amazon S3 or Google Cloud Storage. ...
Most organizations don’t end up with data lakes, says Orlandini. “They have
data swamps,” he says. But data lakes aren’t the only option for creating a
centralized data repository. Another is through a data fabric, an architecture
and set of data services that provide a unified view of an organization’s
data, and enable integration from various sources on-premises, in the cloud
and on edge devices. A data fabric allows datasets to be combined, without the
need to make copies, and can make silos less likely.
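As a rough illustration of Young's suggestion to start with a data lake on object storage, the sketch below creates a landing-zone bucket and lays out raw and curated zones as key prefixes so access can be granted in place instead of by copying data. The bucket name, prefixes, and file are placeholders, and the call assumes AWS credentials are already configured; it is not a full data-lake setup (no cataloging, governance, or access policies).

import boto3  # assumes AWS credentials and a default region are configured

s3 = boto3.client("s3")
BUCKET = "example-data-lake-landing"  # placeholder; bucket names must be globally unique

# Create the landing-zone bucket. Outside us-east-1, create_bucket also needs
# a CreateBucketConfiguration with a LocationConstraint for the region.
s3.create_bucket(Bucket=BUCKET)

# Lay out zones as key prefixes so raw and curated data stay separated and
# access can be granted per prefix rather than by copying datasets around.
for prefix in ("raw/", "curated/", "sandbox/"):
    s3.put_object(Bucket=BUCKET, Key=prefix)

# Land a raw dataset; query engines and BI tools can then read it in place.
s3.upload_file("orders.csv", BUCKET, "raw/sales/orders.csv")

Keeping the zones explicit from day one, and pairing them with a catalog and ownership metadata, is what keeps the lake from turning into the "data swamp" Orlandini warns about.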
Creating Great Psychologically Safe Teams
Conflict avoidance can be corrosive, even deadly, causing teams to miss
opportunities and needlessly exposing them to risk. Members might recognize
hazards but decline to bring them up, perhaps for fear of being seen as
throwing a colleague under the bus… No matter how sensitive the issue or how
serious the criticism, members must feel free to voice their thoughts
openly—though always constructively—and respond to critical input with
curiosity, recognizing that it is a crucial step toward a better solution.
Mamoli pointed out that "there is a lot of misunderstanding around
psychological safety," saying that "it doesn’t mean we’re super nice to each
other and feel comfortable all the time." She explained that the resulting
behaviour should be that teams "hold each other accountable" and can safely
provide direct feedback saying "this is what I need from you. Or you are not
doing this." She said that "this is what we need to remember psychological
safety really means."
Big Tech Behind Bars? The UK's Online Safety Bill Explained
One major criticism of the Online Safety Bill is that it poses a threat to
freedom of expression due to its potential for censoring legal content. Rights
organizations strongly opposed the requirement for tech companies to crack
down on content that was harmful but not illegal. An amendment in November
2022 removed mention of "lawful but harmful" content from the text, instead
obliging tech companies to introduce more sophisticated filter systems to
protect people from exposure to content that could be deemed harmful. Ofcom
will ensure platforms are upholding their terms of service. Child safety
groups opposed this amendment, claiming that it watered down the bill. But as the bill’s most vocal proponents, those groups remain focused on ensuring that the legislation passes into law. Meanwhile, concerns over censorship continue.
An amendment to the bill introduced this week would make sharing videos that
showed migrants crossing the Channel between France and the UK in "a positive
light" illegal. Tech companies would be required to proactively prevent users
from seeing this content.
Quantum Computing Poses a Security Risk: Its Implications Are Unfavorable
The foundation of quantum computing is quantum mechanics, which is
fundamentally different from classical computing. Bits are used in traditional
computing to process information, and they can only be in one of two states: 0
or 1. Quantum bits, or qubits, which can be in multiple states at once, are
used in quantum computing to process data. This enables quantum computers to
carry out some computations much more quickly than traditional computers. The
potential for quantum computing to defeat many of the encryption algorithms
currently in use to safeguard sensitive data is one of its most important
implications. Although encryption algorithms are designed to be hard to crack, they rely on mathematical problems that conventional computers cannot solve in any practical amount of time. Because quantum computers could solve those same problems quickly, the encryption built on them could be broken far sooner. This has serious implications for the security of sensitive data, including financial information, personal information, and national security secrets.
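The article does not name a specific scheme, but RSA is the standard example: its security rests on the difficulty of factoring a large product of two primes, which no known classical algorithm can do in practical time, while Shor’s algorithm on a sufficiently large quantum computer could. The toy sketch below, with deliberately tiny numbers, shows how recovering the factors immediately yields the private key; it is an illustration, not a statement about any real deployment.

def trial_factor(n):
    """Brute-force factoring: feasible only because n is tiny."""
    if n % 2 == 0:
        return 2, n // 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    raise ValueError("n is prime")

# Toy RSA key; real keys use primes hundreds of digits long.
p, q = 61, 53
n, e = p * q, 17                    # public key (n, e)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

ciphertext = pow(42, e, n)          # encrypt the message 42

# An attacker who can factor n rebuilds the private key and decrypts.
p2, q2 = trial_factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(ciphertext, d_recovered, n))  # prints 42

For a 2048-bit modulus, trial division (and every known classical method) is hopeless; the concern is that Shor’s algorithm removes that barrier.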
Combatting the ongoing issue of cyberattacks on the education sector
The growing threat of cyberattacks has underscored that organisations can no
longer depend on conventional perimeter-based defences to protect critical
systems and data. New regulations and industry standards are aimed at shifting
the cybersecurity paradigm – away from the old mantra of ‘trust but verify’
and instead towards a Zero Trust approach, whereby access to applications and
data is denied by default. Threat prevention is achieved by only granting
access to networks and workloads utilising policy informed by continuous,
contextual, risk-based verification across users and their associated devices.
There are many starting points on the path to Zero Trust. However, one driving
principle to determine your priority of implementation should be the knowledge
that the easiest way for cyberattackers to gain access to sensitive data is by
compromising a user’s identity. ... Furthermore, post-mortem analysis has
repeatedly found that compromised credentials are subsequently used to
establish a beachhead on an end-user endpoint, which typically serves as the
main point of access to an enterprise network.
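As a rough sketch of what "deny by default, grant by contextual policy" can look like in code, the example below evaluates an access request against identity, device posture, and a continuously updated risk score, and refuses anything that is not explicitly allowed. The signal names, threshold, and entitlement table are illustrative assumptions, not any particular vendor’s policy model.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool   # e.g. patched, disk-encrypted, EDR agent healthy
    mfa_passed: bool
    risk_score: float        # 0.0 (low) to 1.0 (high), from continuous evaluation
    resource: str

# Per-user entitlements; anything not listed here is denied outright.
ENTITLEMENTS = {"alice": {"grades-db"}, "bob": {"lms"}}

def authorize(req: AccessRequest) -> bool:
    """Grant access only when every contextual check passes; deny everything else."""
    if req.resource not in ENTITLEMENTS.get(req.user, set()):
        return False              # no standing entitlement for this resource
    if not (req.mfa_passed and req.device_compliant):
        return False              # identity or device posture check failed
    if req.risk_score > 0.5:
        return False              # continuous risk signal is too high right now
    return True                   # explicit grant; the default answer is "no"

print(authorize(AccessRequest("alice", True, True, 0.2, "grades-db")))   # True
print(authorize(AccessRequest("alice", True, False, 0.2, "grades-db")))  # False (no MFA)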
How A Company’s Philosophy To ‘Shift Left’ Is Making Headway In The Data Privacy World
Whether privacy sits within legal, security, or both is less important than
ensuring your privacy team is well-resourced and able to collaborate with the
organization as a whole. Key to this collaboration is making sure you have the
necessary legal and engineering staff to conduct privacy reviews and navigate
a rapidly evolving regulatory landscape. Separately, you need to overcome the
perception that privacy is an obstacle to productivity and get your product
and growth teams to see privacy as a competitive advantage that allows them to
build quickly and win consumer trust. Otherwise, pushback, low adoption, and
apathy will prevent you from making any real progress. To unify product
development with privacy standards, you have to make it impossibly easy for
product teams to comply with privacy standards. That means bringing the
privacy program directly into their process, right where they are already
working, as well as giving them easy-to-understand guardrails that let them
build quickly, without having to engage in a painful back and forth with the
privacy lawyers and engineers conducting privacy reviews.
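One hedged example of such a guardrail is a lightweight check a product team could run in CI to flag personal-data fields in a proposed schema before it ever reaches a formal privacy review. The field markers, categories, and guidance strings below are invented for illustration; a real program would draw them from the company’s own data classification policy.

# Hypothetical CI guardrail: flag schema fields that look like personal data.
PII_MARKERS = {
    "email": "contact data",
    "phone": "contact data",
    "ssn": "a government identifier",
    "dob": "a date of birth",
    "ip_address": "an online identifier",
}

def review_schema(fields):
    """Return human-readable findings for fields that look like personal data."""
    findings = []
    for field in fields:
        for marker, category in PII_MARKERS.items():
            if marker in field.lower():
                findings.append(
                    f"'{field}' looks like {category}: document purpose, retention, and access"
                )
    return findings

proposed = ["user_id", "email_address", "signup_ip_address", "plan_tier"]
for finding in review_schema(proposed):
    print(finding)

Because a check like this runs where engineers already work, it nudges teams toward the privacy review early instead of turning it into a late-stage blocker.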
Managing Expectations in Low-Code/No-Code Strategies
To guarantee a LC/NC strategy is successful, organizations must ensure there
is a bulletproof infrastructure, data governance and security system in place,
as well as full visibility into their data and applications. “As a first step,
enterprises must gain an understanding of their data -- what it is, where it
is and what it’s worth,” Mohan says. “From there, IT leaders can understand
where security and compliance vulnerabilities lie and then work to eliminate
these threats while ensuring sufficient oversight for potential legal and
contractual issues.” While the responsibility of developing a LC/NC strategy
falls, initially, on an enterprise’s CTO or CIO, Mohan advises tech leadership
should loop in experts in data security, data protection and governance to
address cyber and compliance threats and ensure employees are following proper
company and legal protocols. ... “Every level of leadership can decide to use
a low-code/no-code strategy, ranging from an engineering team manager who is
tasked with building products for the company, to a CTO setting the strategic
direction of the organization's engineering efforts,” he explains.
Attackers Crafted Custom Malware for Fortinet Zero-Day
The BoldMove backdoor, written in C, comes in two flavors: a Windows version
and a Linux version that the threat actor appears to have customized for
FortiOS, Mandiant said. When executed, the Linux version of the malware first
attempts to connect to a hardcoded command-and-control (C2) server. If
successful, BoldMove collects information about the system on which it has
landed and relays it to the C2. The C2 server then sends instructions to the malware, ultimately giving the threat actor full remote control of the
affected FortiOS device. Ben Read, director of cyber-espionage analysis at
Mandiant, says some of the core functions of the malware, such as its ability
to download additional files or open a reverse shell, are fairly typical of
this type of malware. But the customized Linux version of BoldMove also
includes capabilities to manipulate specific features of FortiOS. "The
implementation of these features shows an in-depth knowledge of the
functioning of Fortinet devices," Read says. "Also notable is that some of the
Linux variant’s features appear to have been rewritten to run on lower-powered
devices."
Quote for the day:
"It is the responsibility of
leadership to provide opportunity, and the responsibility of individuals to
contribute." -- William Pollard