Quote for the day:
"Your problem isn't the problem. Your
reaction is the problem." -- Anonymous
One of the most significant challenges in modern digital forensics, both in the
corporate sector and law enforcement, is the abundance of data. Due to
increasing digital storage capacities, even mobile devices today can accumulate
up to 1TB of information. ... Digital forensics started benefiting from AI
features a few years ago. The first major development in this regard was the
implementation of neural networks for picture recognition and categorization.
This powerful tool has been instrumental for forensic examiners in law
enforcement, enabling them to analyze pictures from CCTV and seized devices more
efficiently. It significantly accelerated the identification of persons of
interest and child abuse victims as well as the detection of case-related
content, such as firearms or pornography. ... No matter how advanced, AI
operates within the boundaries of its training, which can sometimes be
incomplete or imperfect. Large language models, in particular, may produce
inaccurate information if their training data lacks sufficient detail on a given
topic. As a result, investigations involving AI technologies require human
oversight. In DFIR, validating discovered evidence is standard practice. It is
common to use multiple digital forensics tools to verify extracted data and
manually check critical details in source files.
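The cross-tool validation practice described above can be sketched in a few lines of Python. The helper names and the two-tool workflow here are illustrative assumptions, not taken from any particular forensic suite; the point is simply that a digest recomputed independently can confirm or flag what each tool reports:

```python
import hashlib

def file_digest(path: str, algorithm: str = "sha256") -> str:
    """Hash a file in 1 MiB chunks so large evidence images fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def cross_validate(path: str, reported: dict) -> dict:
    """Compare digests reported by independent tools against a fresh
    computation; any mismatch flags the artifact for manual review."""
    actual = file_digest(path)
    return {tool: digest == actual for tool, digest in reported.items()}
```

A mismatch from any one tool does not prove tampering, but it tells the examiner exactly which artifact needs the manual check the article recommends.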
Implementing a payment ban is not without challenges. In the short term,
retaliatory attacks are a real possibility as cybercriminals attempt to
undermine the policy. However, given the prevalence of targets worldwide, I
believe most criminal gangs will simply focus their efforts elsewhere. The
government’s resolve would certainly be tested if paying a ransom were seen
as the only way to avoid public health data being leaked, energy networks being
crippled, or a CNI organization going out of business. In such
cases, clear guidelines as well as technical and financial support mechanisms
for affected organizations are essential. Policy makers must develop playbooks
for such scenarios and run education campaigns that raise awareness about the
policy’s goals, emphasizing the long-term benefits of standing firm against
ransom demands. That said, increased resilience—both technological and
organizational—is integral to any strategy. Enhanced cybersecurity measures are
critical, in particular a zero trust strategy that reduces an organization’s
attack surface and stops hackers from being able to move laterally in the
network. The U.S. federal government has already committed to move to zero trust
architectures.
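In miniature, the zero trust idea is to evaluate every request on identity, device posture, and explicit policy, with deny as the default and no standing trust granted to network location. The following sketch is purely illustrative (the field names and policy table are hypothetical), but it shows why default-deny also limits lateral movement:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool
    mfa_passed: bool
    resource: str

# Hypothetical policy table: which identities may reach which resources.
ACCESS_POLICY = {("alice", "billing-db"), ("bob", "build-server")}

def authorize(req: Request) -> bool:
    """Zero trust in miniature: check identity strength and device
    posture on every request, then require an explicit policy match.
    Anything not explicitly allowed is denied."""
    if not (req.mfa_passed and req.device_compliant):
        return False
    return (req.user, req.resource) in ACCESS_POLICY
```

An attacker who compromises one account still cannot pivot to resources outside that account's explicit grants, which is the lateral-movement property the article highlights.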
Why is building a data-driven culture incredibly hard? Because it calls for a
behavioral change across the organization. This work is neither easy nor quick.
To better appreciate the scope of this challenge, let’s do a brief thought
exercise. Take a moment to reflect on these questions: How involved are your
leaders in championing and directly following through on data-driven
initiatives? Do you know whether your internal stakeholders are all equipped and
empowered to use data for all kinds of decisions, strategic or tactical? Does
your work environment make it easy for people to come together, collaborate with
data, and support one another when they’re making decisions based on the
insights? Does everyone in the organization truly understand the benefits of
using data, and are success stories regularly shared internally to inspire
people to action? If your answers to these questions are “I’m not sure” or
“maybe,” you’re not alone. Most leaders assume in good faith that their
organizations are on the right path. But they struggle when asked for concrete
examples or data-backed evidence to support these gut-feeling assumptions. The
leaders’ dilemma becomes even clearer when you consider that the elements at
the core of the four questions above — leadership intervention, data
empowerment, collaboration, and value realization — are inherently qualitative.
Most organizational metrics or operational KPIs don’t capture them today.
Scaling product centricity in an organization is like walking a tightrope.
Leaders must drive change while maintaining smooth operations. This requires
forming cross-functional teams, adopting outcome-based evaluation, and
navigating multiple operating models. As a CIO, balancing change while facing the internal
resistance of a risk-averse, siloed business culture can feel like facing a
strong wind on a high wire. ...The key to overcoming this is to demonstrate the
benefits of a product-centric approach incrementally, proving its value until it
becomes the norm. To prevent cultural resistance from derailing your vision for
a more agile enterprise, leverage multiple IT operating models with a service or
value orientation to meet the ambitious expectations of CEOs and boards. Engage
the C-suite by taking a holistic view of how democratized IT can be used to meet
stakeholder expectations. Every organization has a business and enterprise
operating model to create and deliver value. A business model might focus on
manufacturing products that delight customers, requiring the IT operating model
to align with enterprise expectations. This alignment involves deciding whether
IT will merely provide enabling services or actively partner in delivering
external products and services.
"As the role of the CISO grows more complex and critical to organisations,
CISOs must be able to balance security needs with business goals and culture,
and articulate the value of security investments." She highlights the importance
of strong relationships across departments and stakeholders in bolstering
cybersecurity and privacy programmes. The study further discusses the positive
impact of having board members with a cybersecurity background. These members
foster stronger relationships with security teams and have more confidence in
their organisation's security stance. For instance, boards with a CISO member
report higher effectiveness in setting strategic cybersecurity goals and
communicating progress, compared to boards without such expertise. CISOs with
robust board relationships report improved collaboration with IT operations
and engineering, allowing them to explore advanced technologies like
generative AI for enhanced threat detection and response. However, gaps
persist in priority alignment between CISOs and boards, particularly around
emerging technologies, upskilling, and revenue growth. Expectations for CISOs
to develop leadership skills add complexity to their role, with many
recognising a gap in business acumen, emotional intelligence, and
communication.
Researchers at the University of Waterloo's Cheriton School of Computer Science,
led by Professor Martin Karsten and including Peter Cai, identified
inefficiencies in network traffic processing for communications-heavy server
applications. Their solution, which involves rearranging operations within the
Linux networking stack, has shown improvements in both performance and energy
efficiency. The modification, presented at an industry conference, increases
throughput by up to 45 percent in certain situations without compromising tail
latency. Professor Karsten likened the improvement to optimizing a manufacturing
plant's pipeline, resulting in more efficient use of data center CPU caches.
Professor Karsten collaborated with Joe Damato, a distinguished engineer at
Fastly, to develop a non-intrusive kernel change consisting of just 30 lines of
code. This small but impactful modification has the potential to reduce energy
consumption in critical data center operations by as much as 30 percent. Central
to this innovation is a feature called IRQ (interrupt request) suspension, which
balances CPU power usage with efficient data processing. By reducing unnecessary
CPU interruptions during high-traffic periods, the feature enhances network
performance while maintaining low latency during quieter times.
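As a rough mental model only (this is a toy simulation, not the researchers' kernel patch), IRQ suspension can be pictured as taking one interrupt to start processing, then masking the interrupt and busy-polling until the queue drains, and re-arming it when traffic goes quiet:

```python
def interrupts_with_suspension(traffic, poll_batch=4):
    """Toy model: the first burst raises one interrupt, after which the
    IRQ stays masked and the queue is busy-polled until it drains."""
    interrupts, queue, masked = 0, 0, False
    for arriving in traffic:
        queue += arriving
        if queue and not masked:
            interrupts += 1      # interrupt fires to kick off processing
            masked = True        # suspend further IRQs while work remains
        queue = max(0, queue - poll_batch)  # busy-poll one batch per tick
        if queue == 0:
            masked = False       # quiet again: re-arm the interrupt
    return interrupts

def interrupts_naive(traffic):
    """Baseline: an interrupt for every tick that delivers packets."""
    return sum(1 for arriving in traffic if arriving)
```

Under a sustained burst such as `[5, 5, 5, 5, 0]`, the suspended model takes a single interrupt where the naive model takes one per delivery, which is the intuition behind the reported throughput and energy gains.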
While the credential helper is designed to return a message containing the
credentials that are separated by the newline control character ("\n"), the
research found that GitHub Desktop is susceptible to a case of carriage return
("\r") smuggling whereby injecting the character into a crafted URL can leak the
credentials to an attacker-controlled host. "Using a maliciously crafted URL
it's possible to cause the credential request coming from Git to be
misinterpreted by Github Desktop such that it will send credentials for a
different host than the host that Git is currently communicating with thereby
allowing for secret exfiltration," GitHub said in an advisory. A similar
weakness has also been identified in the Git Credential Manager NuGet package,
allowing for credentials to be exposed to an unrelated host. ... "While both
enterprise-related variables are not common, the CODESPACES environment
variable is always set to true when running on GitHub Codespaces," Ry0taK said.
"So, cloning a malicious repository on GitHub Codespaces using GitHub CLI will
always leak the access token to the attacker's hosts." ... In response to the
disclosures, the credential leakage stemming from carriage return smuggling has
been treated by the Git project as a standalone vulnerability (CVE-2024-52006,
CVSS score: 2.1) and addressed in version v2.48.1.
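The underlying flaw generalizes: Git's credential protocol delimits fields with "\n" only, so any consumer that also honors a bare "\r" as a line break lets a crafted URL inject extra fields. A simplified Python illustration of the parsing difference (this is not GitHub Desktop's actual code, and Git's own fix in v2.48.1 instead rejects values containing control characters outright):

```python
def parse_naive(message: str) -> dict:
    """Vulnerable parser: str.splitlines() treats a bare "\r" as a line
    break, so a carriage return smuggled inside a value injects a
    second key=value pair that overwrites the first."""
    fields = {}
    for line in message.splitlines():
        key, _, value = line.partition("=")
        fields[key] = value
    return fields

def parse_strict(message: str) -> dict:
    """Safer parser: split only on "\n", as the protocol specifies."""
    fields = {}
    for line in message.split("\n"):
        key, _, value = line.partition("=")
        fields[key] = value
    return fields

# An attacker smuggles "\rhost=evil.example" into the request via a crafted URL.
msg = "protocol=https\nhost=github.com\rhost=evil.example"
```

The naive parser resolves the host to evil.example, so the credentials are looked up and sent for the attacker's domain; the strict parser keeps the smuggled bytes inside a single host value, where they fail host matching instead of leaking.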
What's excellent about no-code is that you can build a platform that won't
require your customers to be development professionals — but will allow
customization. That's the best approach: create a blank canvas for people, and
they will take it from there. Whether it's surveys, invoices, employee records,
or something completely different, developers have the tools to make it visually
appealing to your customers, making it more intuitive for them. I also want to
dispel the myth that no-code doesn't allow effective data management. It is
possible to create a no-code platform that will empower users to perform complex
mathematical operations seamlessly and to support managing interrelated data.
This means users' applications will be more robust than their competitors' and
produce more meaningful insights. ... As a developer, I am passionate about
evolving tech and our industry's challenges. I am also highly aware of people's
concerns over the security of many no-code solutions. Security is a critical
component of any software; no-code solutions are no exception. One-off custom
software builds do not typically undergo the same rigorous security testing as
widely used commercial software due to the high cost and time involved. This
leaves them vulnerable to security breaches.
The development of appropriate skills and capabilities has emerged as a critical
challenge, ranking as a pressing concern in advancing digital operations. The
talent shortage is most acute in North America and the media industry, where
fierce competition for skilled professionals coincides with accelerating digital
transformation initiatives. Organizations face a dual challenge: upskilling
existing staff while competing for scarce talent in an increasingly competitive
market. The report suggests this skills gap could potentially slow the adoption
of new technologies and hamper operational advancement if not adequately
addressed. "The rapid evolution of how AI is being applied to many parts of jobs
to be done is unmatched," Armandpour said. "Raising awareness, educating, and
fostering a rich learning environment for all employees is essential." ...
"Service outages today can have a much greater impact due to the
interdependencies of modern IT architectures, so security is especially
critical," Armandpour said. "Organizations need to recognize security as a
critical business imperative that helps power operational resilience, customer
trust, and competitive advantage." What sets successful organizations apart is
the prioritization of defining robust security requirements upfront and
incorporating security-by-design into product development cycles.
In fact, one big risk right now is how dependent developers are becoming on LLMs
to do their thinking for them. I’ve argued that LLMs help senior developers more
than junior developers, precisely because more experienced developers know when
an LLM-driven coding assistant is getting things wrong. They use the LLM to
speed up development without abdicating responsibility for that development.
Junior developers can be more prone to trusting LLM output too much and don’t
know when they’re being given good code or bad. Even for experienced engineers,
however, there’s a risk of entrusting the LLM to do too much. For example, Mike
Loukides of O’Reilly Media went through their learning platform data and found
developers show “less interest in learning about programming languages,” perhaps
because developers may be too “willing to let AI ‘learn’ the details of
languages and libraries for them.” He continues, “If someone is using AI to
avoid learning the hard concepts—like solving a problem by dividing it into
smaller pieces (like quicksort)—they are shortchanging themselves.” Short-term
thinking can yield long-term problems. As noted above, more experienced
developers can use LLMs more effectively because of experience. If a developer
offloads learning for quick-fix code completion at the long-term cost of
understanding their code, that’s a gift that will keep on taking.
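The divide-and-conquer concept Loukides cites is exactly the kind of thing worth internalizing rather than offloading. Quicksort is the canonical example: partition a list around a pivot, then solve each smaller piece recursively:

```python
def quicksort(items):
    """Divide and conquer: pick a pivot, split the rest into smaller and
    larger parts, then sort each smaller problem recursively."""
    if len(items) <= 1:
        return items  # a list of 0 or 1 elements is already sorted
    pivot, *rest = items
    smaller = [x for x in rest if x <= pivot]
    larger = [x for x in rest if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)
```

A developer who has worked through why this recursion terminates and what the pivot choice costs can judge whether an LLM's suggested variant is sound; one who has only ever pasted the output cannot.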