Quote for the day:
"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous
The CISO’s role in advancing innovation in cybersecurity
CISOs must know the risks of adopting untested solutions, keeping in mind
their organization’s priorities and learning how to evaluate new tools and
technologies. “We also ensure both parties have clear, shared goals from the
start, so we avoid misunderstandings and set everyone up for success,” Maor
tells CSO. ... It’s a golden era of cybersecurity innovation driven by
emerging cybersecurity threats, but it’s a tale of two companies, according to
Perlroth. AI is attracting significant amounts of funding while it’s harder
for many other types of startups. Cybersecurity companies continue to get a
lot of interest from venture capital (VC) firms, although she’s seeing
founders themselves eschewing big general funds in favor of funds and
investors with industry knowledge. “Startup founders frequently want to work
with venture capitalists who have some kind of specific value add or cyber
expertise,” says Perlroth. In this environment, there’s more potential for
CISOs to be involved and those with an appetite for the business side of cyber
innovation can look for opportunities to advise and invest in new businesses.
Cyber-focused venture capital (VC) firms often engage CISOs to participate in
advisory panels and assist with due diligence when vetting startups, according
to Haleliuk.
The risks of supply chain cyberattacks on your organisation
Organisations need to ensure they take steps to prevent the risk of key
suppliers falling victim to cyberattacks. A good starting point is to work out
just where they are most exposed, says Lorri Janssen-Anessi, director of
external cyber assessments at BlueVoyant. “Understand your external attack
surface and third-party integrations to ensure there are no vulnerabilities,”
she urges. “Consider segmentation of critical systems and minimise the blast
radius of a breach. Identify the critical vendors or suppliers and ensure
those important digital relationships have stricter security practices in
place.” Bob McCarter, CTO at NAVEX, believes there needs to be a stronger
emphasis on cybersecurity when selecting and reviewing suppliers. “Suppliers
need to have essential security controls including multi-factor
authentication, phishing education and training, and a Zero Trust framework,”
he says. “To avoid long-term financial loss, they must also adhere to relevant
cybersecurity regulations and industry standards.” But it’s also important to
regularly perform risk assessments, even once the relationship is established,
says Janssen-Anessi. “The supply chain ecosystem is not static,” she warns.
“Networks and systems are constantly changing to ensure usability. To stay
ahead of vulnerabilities or risks that may pop up, it is important to
continuously monitor these suppliers.”
Deepseek's AI model proves easy to jailbreak - and worse
On Thursday, Unit 42, a cybersecurity research team at Palo Alto Networks,
published results on three jailbreaking methods it employed against several
distilled versions of DeepSeek's V3 and R1 models. ... "Our research findings
show that these jailbreak methods can elicit explicit guidance for malicious
activities," the report states. "These activities include keylogger creation,
data exfiltration, and even instructions for incendiary devices, demonstrating
the tangible security risks posed by this emerging class of attack."
Researchers were able to prompt DeepSeek for guidance on how to steal and
transfer sensitive data, bypass security, write "highly convincing"
spear-phishing emails, conduct "sophisticated" social engineering attacks, and
make a Molotov cocktail. They were also able to manipulate the models into
creating malware. ... "While information on creating Molotov cocktails and
keyloggers is readily available online, LLMs with insufficient safety
restrictions could lower the barrier to entry for malicious actors by
compiling and presenting easily usable and actionable output," the paper adds.
... "By circumventing standard restrictions, jailbreaks expose how much
oversight AI providers maintain over their own systems, revealing not only
security vulnerabilities but also potential evidence of cross-model influence
in AI training pipelines," it continues.
10 skills and traits of successful digital leaders
An important skill for CIOs is strategic thinking, which means adopting a “why”
mindset, notesGill Haus, CIO of consumer and community banking at JPMorgan
Chase. “I ask questions all the time — even on subjects I think I’m most
knowledgeable about,” Haus says. “When others see their leader asking questions,
even in the company of more senior leaders, it creates a welcoming atmosphere
that encourages everyone to feel safe doing the same. ... Effective leaders have
a clear vision of what technology can do for their organization as well as a
solid understanding of it, agrees Stephanie Woerner, director and principal
research scientist at the MIT’s Center for Information Systems Research (CISR).
“They think about the new things they can do with technology, different ways of
getting work done or engaging with customers, and how technology enables that.”
... Being able to translate complex technical concepts into clear business value
while also maintaining realistic implementation timelines is another important
skill. Tech leaders are up to their eyeballs in data, systems, and processes,
but all users want is that a product works. A strong digital leader should
constantly ask themselves how they can make something easier for their
customers.
Prompt Injection for Large Language Models
Many businesses put all of their secrets into the system prompt, and if you're
able to steal that prompt, you have all of their secrets. Some of the companies
are a bit more clever, and they put their data into files that are then put into
the context or referenced by the large language model. In these cases, you can
just ask the model to provide you links to download the documents it knows
about. Sometimes there are interesting URLs pointing to internal documents, such
as Jira, Confluence, and the like. You can learn about the business and its data
that it has available. That can be really bad for the business. Another thing
you might want to do with these prompt injections is to gain personal
advantages. Imagine a huge company, and they have a big HR department, they
receive hundreds of job applications every day, so they use an AI based tool to
evaluate which candidates are a fit for the open position. ... Another approach
to make your models less sensitive to prompt injection and prompt stealing is to
fine-tune them. Fine-tuning basically means you take a large language model that
has been trained by OpenAI, Meta, or some other vendor, and you retrain it with
additional data to make it more suitable for your use case.
The hidden dangers of a toxic cybersecurity workplace
Certain roles in cybersecurity are more vulnerable to toxic environments due to
the nature of their responsibilities and visibility within the organization. SOC
analysts, for instance, are often on the frontlines, dealing with high-pressure
situations like incident response and threat mitigation. The expectation to
always be “on” can lead to burnout, especially in a culture that prioritizes
output over well-being. Similarly, CISOs face unique challenges as they balance
technical, strategic, and political pressures. They’re often caught between
managing expectations from the C-suite and addressing operational realities.
CISO burnout is very real, driven in part by the immense liability and scrutiny
associated with the role. The constant pressure, combined with the growing
complexity of threats, leads many CISOs to leave their positions, with some even
vowing, “never again will I do this job.” This trend is tragic, as organizations
lose experienced leaders who play a critical role in shaping cybersecurity
strategies. ... Leaders play a crucial role in fostering a positive culture and
must take proactive steps to address toxicity. They should prioritize open
communication and actively solicit feedback from their teams on a regular basis.
Anonymous surveys, one-on-one meetings, and team discussions can help identify
pain points.
The Cultural Backlash Against Generative AI
Part of the problem is that generative AI really can’t effectively do everything
the hype claims. An LLM can’t be reliably used to answer questions, because it’s
not a “facts machine”. It’s a “probable next word in a sentence machine”. But
we’re seeing promises of all kinds that ignore these limitations, and tech
companies are forcing generative AI features into every kind of software you can
think of. People hated Microsoft’s Clippy because it wasn’t any good and they
didn’t want to have it shoved down their throats — and one might say they’re
doing the same basic thing with an improved version, and we can see that some
people still understandably resent it. When someone goes to an LLM today and
asks for the price of ingredients in a recipe at their local grocery store right
now, there’s absolutely no chance that model can answer that correctly,
reliably. That is not within its capabilities, because the true data about those
prices is not available to the model. The model might accidentally guess that a
bag of carrots is $1.99 at Publix, but it’s just that, an accident. In the
future, with chaining models together in agentic forms, there’s a chance we
could develop a narrow model to do this kind of thing correctly, but right now
it’s absolutely bogus. But people are asking LLMs these questions today! And
when they get to the store, they’re very disappointed about being lied to by a
technology that they thought was a magic answer box.
Developers: The Last Line of Defense Against AI Risks
Considering security early in the software development lifecycle has not
traditionally been a standard practice amongst developers. Of course, this
oversight is a goldmine for cybercriminals who exploit ML models to inject
harmful malware into software. The lack of security training for developers
makes the issue worse, particularly when AI-generated code, trained on
potentially insecure open source data, is not adequately screened for
vulnerabilities. Unfortunately, once AI/ML models integrate such code, the
potential for undetected exploits only increases. Therefore, developers must
also function as security champions, and DevOps and Security can no longer be
considered separate functions. ... As AI continues to be implemented at scale by
different teams, the need for advanced security in ML models is key. Enter the
“Shift Left” approach, which advocates for integrating security measures early
in the software lifecycle to get ahead and prevent as many future
vulnerabilities as possible and ensure comprehensive security throughout the
development process. This strategy is critical in AI/ML development, before
they’re even deployed, to ensure the security and compliance of code and models,
which often come from external sources and sometimes cannot be trusted.
How Leaders Can Leverage AI For Data Management And Decision-Making
“The real challenge isn’t just the cost of storing data—it’s making sense of
it,” explains Nilo Rahmani, CEO of Thoras.ai. “An estimated 80% of incident
resolution time is spent simply identifying the root cause, which is a costly
inefficiency that AI can help solve.” AI-powered analytics can detect patterns,
predict failures, and automate troubleshooting, reducing downtime and improving
reliability. By leveraging AI, companies can streamline their data operations
while increasing speed and accuracy in decision-making. Effective data
management extends beyond simple storage—it requires real-time intelligence to
ensure organizations are using the right data at the right time. AI plays a
critical role in distinguishing meaningful data from noise, helping companies
focus on insights that drive growth. ... AI is poised to revolutionize data
management, but success will depend on how well organizations integrate it into
their existing frameworks. Companies that embrace AI-driven automation,
predictive analytics, and proactive infrastructure management will not only
reduce costs but also gain a competitive edge by making faster, smarter
decisions. Leaders must shift their focus from simply collecting and storing
data to using it intelligently.
Ramping Up AI Adoption in Local Government
One of the biggest barriers stopping local authorities from embracing AI is the
lack of knowledge and misunderstanding around the technology. For many years the
fear of the unknown has caused confusion, with numerous news articles claiming
modern technology poses a threat to humanity. This could not be further from the
truth. ... One key area that is missing from the AI Opportunities Actions Plan
is managing and upskilling workers. People are core to every transformation,
even ones that are digitally focused. To truly unlock the power of AI, employees
need to be supported and trained in a judgement free space, allowing them to
disclose any concerns or areas of support. After years of fear-mongering some
employees may be more hesitant to engage with an AI transformation. Therefore,
it’s up to leaders to adopt a top-down approach to promoting and embracing AI in
the workplace. To begin, a skills audit should be conducted, assessing the
existing knowledge and experiences with AI-related skills. Based on this,
customised training plans can be developed to ensure everyone within the
organisation feels supported and confident. It’s important for leaders to
emphasise that a digital transformation doesn’t mean job cuts, but rather, takes
away the time-consuming jobs and allows staff to focus on higher value, creative
and strategic work.
No comments:
Post a Comment