Quote for the day:
"If it wasn't hard, everyone would do it, the hard is what makes it great." -- Tom Hanks
Privacy Puzzle: Are Businesses Ready for the DPDP Act?

The State of Data Privacy in India 2024 report shows mixed responses. While
56% of businesses think the DPDP Act addresses key privacy issues, 30% are
unsure and 14% remain skeptical. Even more troubling, more than 82% of
companies lack transparency in handling data, raising serious trust concerns.
... smaller businesses, such as micro, small, and medium enterprises (MSMEs)
and startups, often struggle due to limited resources. Many rely on IT or
legal teams to oversee privacy initiatives, with some lacking any formal
governance structures. This fragmented approach poses significant risks,
especially as these organizations are equally subject to regulatory scrutiny
under the DPDP Act. ... Third-party risk is another critical concern. Many
enterprises depend on vendors for essential services, yet only 38% use a
combination of risk assessments and contractual obligations to manage
third-party privacy risks. Eight percent of organizations lack any significant
measures, leaving them exposed to potential data leaks and regulatory
penalties. ... Despite progress made in privacy staffing and strategy
alignment, privacy professionals are experiencing increased stress within a
complex compliance and risk landscape, according to new research from
ISACA.
CISOs: Stop trying to do the lawyer’s job

“It’s good to be mindful in advance of the security and privacy requirements
in the jurisdictions the organization is operating within, and to prepare
possible responses should there be incidents that violate those laws and how
to respond to those,” says Christine Bejerasco, CISO at WithSecure. Of course,
the conversation between the two parties can go smoothly if there’s an
existing relationship. If not, that relationship should be built. “Reaching
out to legal experts should be as straightforward as reaching out to another
colleague,” Bejerasco adds. “Just talk to them directly.” ... Some CISOs have a
legal background or extensive experience working with general counsel. However,
this does not mean they should act as legal advisors
or take on responsibilities outside their role. “It is important to respect
boundaries and not overstep job functions,” says Stacey Cameron, CISO at
Halcyon. “There’s nothing wrong with differing opinions, interpretations, or
healthy discussions, but for legal matters, it will be the lawyers’
responsibility to make a case on behalf of the company, so we need to respect
each other’s roles and stay in our respective lanes.” According to Cameron,
overstepping boundaries is one of the biggest mistakes CISOs can make when
trying to build a relationship with their organization’s lawyers.
Inside Monday’s AI pivot: Building digital workforces through modular AI

The initial deployment of gen AI at Monday didn’t quite generate the return on
investment users wanted, however. That realization led to a bit of a rethink
and pivot as the company looked to give its users AI-powered tools that
actually help to improve enterprise workflows. That pivot has now manifested
itself with the company’s “AI blocks” technology and the preview of its
agentic AI technology that it calls “digital workforce.” Monday’s AI journey,
for the most part, is all about realizing the company’s founding vision. “We
wanted to do two things, one is give people the power we had as developers,”
Mann told VentureBeat in an exclusive interview. “So they can build whatever
they want, and they feel the power that we feel, and the other end is to build
something they really love.” ... Simply put, AI functionality needs to be in
the right context for users — directly in a column, component or service
automation. AI blocks are pre-built AI functions that Monday has made
accessible and integrated directly into its workflow and automation tools. For
example, in project management, the AI can provide risk mapping and
predictability analysis, helping users better manage their projects.
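
To make that modular design concrete, here is a short Python sketch of the
general pattern the article describes: pre-built AI functions registered once
and then invoked in the context of a workflow item. It is purely illustrative;
every name in it (AIBlock, register_block, risk_mapping) is hypothetical and
not Monday's actual API.

```python
# Illustrative sketch of modular "AI block"-style functions in a workflow
# engine. All names are hypothetical, not Monday's actual API.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class AIBlock:
    name: str                     # identifier used by workflow automations
    description: str              # shown to users when they attach the block
    run: Callable[[dict], dict]   # takes item context, returns AI results

REGISTRY: Dict[str, AIBlock] = {}

def register_block(block: AIBlock) -> None:
    """Make a pre-built AI function available to columns and automations."""
    REGISTRY[block.name] = block

def risk_mapping(item: dict) -> dict:
    """Toy stand-in for an AI risk-mapping function: flag overdue, unowned work."""
    risk = 0
    if item.get("days_overdue", 0) > 0:
        risk += 2
    if not item.get("owner"):
        risk += 1
    return {"risk_score": risk, "label": "high" if risk >= 2 else "low"}

register_block(AIBlock("risk_mapping", "Flags at-risk project items", risk_mapping))

# A workflow automation would invoke the block against a board item:
print(REGISTRY["risk_mapping"].run({"days_overdue": 3, "owner": None}))
# -> {'risk_score': 3, 'label': 'high'}
```

The registry is the point of the pattern: the same pre-built function can be
attached to a column, a component, or an automation without users writing any
AI plumbing themselves.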
Courting Global Talent: How can Web3 Startups Attract the Best Developers in the World?

Any company without concrete values guiding its recruitment will often hire
quickly and end up with regrettable results. Web3 projects are no
exception. Fortunately, there are a number of pre-established values in Web3
that can help offset this tendency: community, inclusivity, sustainability, and
collaboration. These beliefs should be the guiding frameworks behind any Web3
startup's hiring policy, enabling them to assess candidates with a clear
understanding of whether the applicant's character aligns with the company's
DNA. High-performing people are needed in Web3 who can not only bring their own
unique experiences to an organisation, but whose broader values very much align
with the company's guiding principles. The focus of any hiring strategy should
never be quantity over quality, as this will almost always result in
disappointment and wasted time. Hiring people who are the right fit - measured
by how well the candidate exemplifies the company's overarching values - should
be non-negotiable. Likewise, transparency, another of Web3's core tenets, should
be baked into every step of the hiring funnel, and it comes in two modes.
Firstly, Web3 companies should be aware of their unique value proposition and
amplify this in their external marketing efforts.
Is DOGE a cybersecurity threat? A security expert explains the dangers of violating protocols and regulations

Traditionally, the purpose of cybersecurity is to ensure the confidentiality and
integrity of information and information systems while helping keep those
systems available to those who need them. But in DOGE's first few weeks of
existence, reports indicate that its staff appears to be ignoring those
principles and potentially making the federal government more vulnerable to
cyber incidents. ... Currently, the general public, federal agencies and
Congress have little idea who is tinkering with the government's critical
systems. DOGE's hiring process, including how it screens applicants for
technical, operational or cybersecurity competency, as well as experience in
government, is opaque. And journalists investigating the backgrounds of DOGE
employees have been intimidated by the acting U.S. attorney in Washington. DOGE
has hired young people fresh out of—or still in—college or with little or no
experience in government, but who reportedly have strong technical prowess. Some,
however, have questionable backgrounds for such sensitive work. And one leading DOGE
staffer working at the Treasury Department has since resigned over a series of
racist social media posts. ... DOGE operatives are quickly developing and
deploying major software changes to very complex old systems and databases,
according to reports.
Australian businesses urged to help shape new data security framework

With the consultation process entering its final stages, businesses are
encouraged to take part in upcoming workshops or submit feedback online.
Workshops will take place in Sydney on Tuesday 18 February, Brisbane on
Wednesday 19 February, and Melbourne on Wednesday 26 February. For those unable
to attend, an online survey is available for businesses to provide their
insights. Key emphasised the significance of business participation in shaping
the framework. "This is the last chance to get involved in the industry
consultation," he said. "Workshops are taking place this month, but if people
can't attend, we'd love them to complete the survey online." The workshops will
be interactive, allowing participants to share their experiences with data
security, discuss their existing frameworks, and provide recommendations. ...
Without meaningful industry engagement, the framework risks being ineffective or
underutilised. Key warned that failing to gather input from businesses could
lead to a framework that does not meet their needs. "We essentially would be
creating an industry framework that industry may or may not actually utilise,"
he said. "This is really designed for industry, and we need that kind of input
from industry for it to work for them."
Can AI Early Warning Systems Reboot the Threat Intel Industry?

AI platforms learn how multiple campaigns connect, which malicious tools get
reused, and how often threat actors pivot to new malicious infrastructure and
domains. That kind of cross-campaign insight is gold for defenders, especially
when the data is available in real time. Of course, adversaries won’t line up to
feed their best secrets to OpenAI, Microsoft or Google AI platforms. Some hacker
groups prefer open-source models, hosting them on private servers where there’s
zero chance of being monitored. As these open-source models gain sophistication,
criminals can test or refine their attacks without Big Tech breathing down their
necks, but the lure of advanced online models with powerful capabilities will be
hard to resist. Even as security experts remain bullish on the power of AI to
save threat intel, there are adversarial concerns at play. Some warn that
attackers can poison AI systems, manipulate data to produce false negatives, or
exploit generative models for their own malicious scripts. But as it stands, the
big AI platforms already see more malicious signals in a day than any single
cybersecurity vendor sees in a year. That scale is exactly what’s been missing
from threat intelligence. For all the talk about “community sharing” and open
exchanges, it’s always been a tangled mess.
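
The cross-campaign insight described above can be illustrated with a toy
Python sketch: index campaigns by the infrastructure they reuse, so that
otherwise separate operations can be linked. The indicator data below is
invented for the example; real platforms would run this kind of correlation
over billions of signals.

```python
# Toy illustration of cross-campaign correlation: link separate campaigns
# that reuse the same malicious infrastructure. All data is invented.
from collections import defaultdict

campaigns = [
    {"campaign": "A", "c2_domains": {"evil-cdn.example", "drop1.example"}},
    {"campaign": "B", "c2_domains": {"drop1.example", "paste-x.example"}},
    {"campaign": "C", "c2_domains": {"unrelated.example"}},
]

# Index campaigns by each domain they use, then report shared infrastructure.
by_domain = defaultdict(set)
for c in campaigns:
    for domain in c["c2_domains"]:
        by_domain[domain].add(c["campaign"])

shared = {d: names for d, names in by_domain.items() if len(names) > 1}
print(shared)  # {'drop1.example': {'A', 'B'}} -> campaigns A and B likely linked
```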
Security validation: The new standard for cyber resilience

Stolen credentials are a goldmine for attackers. According to Verizon’s 2024
Data Breach Investigations Report (DBIR), compromised credentials account for
31% of breaches over the past decade and 77% of web application attacks. The
Colonial Pipeline attack in 2021 is a stark reminder of the damage that can
result from leaked credentials—attackers gained access to the company’s VPN
using credentials found on the dark web. Security validation makes it easy to
test for credential-related risks. ... One of the most significant benefits of
security validation is its ability to provide evidence-based guidance for
remediation. Rather than adopting a “patch everything” approach, teams can focus
on the most critical fixes based on real exploitability risk and system impact.
... Traditional security metrics, such as the number of vulnerabilities patched
or the percentage of endpoints with antivirus software, only tell part of the
story. Security validation offers a fresh perspective by measuring your posture
based on emulated attacks. This shift from reactive to proactive security
management is essential in today’s ever-changing threat landscape. By safely
emulating real-world attacks in live environments, security validation ensures
that your controls can detect, block, and respond to threats before damage
occurs.
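
As one small, standalone illustration of a credential-related check, the
Python sketch below tests a password against the public Pwned Passwords range
API using its k-anonymity scheme, so only the first five characters of the
SHA-1 hash ever leave the machine. This is a single narrow check, not a
stand-in for a full security validation platform.

```python
# Minimal sketch: check whether a password appears in known breach corpora
# via the Pwned Passwords k-anonymity API (https://haveibeenpwned.com/API/v3).
# Only the first 5 hex chars of the SHA-1 hash are sent over the network.
import hashlib
import requests

def breach_count(password: str) -> int:
    """Return how many times the password appears in known breach data."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # "password123" is famously breached; expect a very large count.
    print(breach_count("password123"))
```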
Cyber insurance is no silver bullet for cybersecurity

Cyber insurance is designed to minimise organisations’ financial losses from
cyber incidents by covering costs like breach notification, data restoration,
legal fees, and even ransomware payments. Insurers evaluate an organisation’s
security posture by assessing the implementation of specific security controls.
... Despite its potential, research reveals that cyber insurance falls short in
improving security practices. A report by the Royal United Services Institute
(RUSI) think tank points out that cyber insurance policies often lack
standardisation and fail to incentivise organisations to adopt security
practices aligned with frameworks like ISO 27001 or NIST CSF. Another study
emphasises that insurance requirements may be motivated by various other factors
(e.g., controls that reduce very specific risks, length of policy period,
liability risks) rather than improving overall organisational security in a meaningful
way. Not only does this gap weaken the argument for cyber insurance improving
security, it also poses a risk for businesses. Organisations meeting insurance
requirements (which may be minimal in terms of security) may mistakenly believe
they are well-protected, only to find themselves vulnerable to attacks that
exploit overlooked weaknesses.
The Metamorphosis of Open Source: An Industry in Transition

The rise of artificial intelligence has introduced a new topic to the open
source conversation. Unlike traditional software, AI systems include not only code
but also models, data, and training methods, creating complexities that existing open
source licenses were not designed to address. Recognizing this gap, the OSI
launched the Open Source AI Definition (OSAID) in 2024, marking a pivotal moment
in the evolution of open source principles. OSAID v1.0 defines the essential
freedoms for AI systems: the rights to use, study, modify, and share AI
technologies without restriction. This framework aims to ensure that AI systems
labeled as “open source” align with the core values of transparency and
collaboration underpinning the movement. However, the journey has not been
without challenges. The OSI’s definition has sparked debates, particularly
around the legal ambiguities of model weights and data licensing. For instance,
while OSAID emphasizes transparency in data sources and methodologies, it does
not resolve whether model weights derived from unlicensed data can be freely
shared or used commercially. This has left businesses and developers navigating
a gray area, where the practical adoption of open source AI models requires
careful legal scrutiny.