Cybersecurity spending trends for 2022: Investing in the future
Despite the steady state of funding, CISOs aren’t going to be flush with cash.
Security leaders and executive advisors say security departments must continue
to show that they’re delivering value for the dollars spent, maturing their
operations, and, ultimately, improving their organization’s security posture.
“Organizations know that risks are increasing every day, and as such,
investments continue to pour into cybersecurity,” says Joe Nocera, leader of
PwC’s Cyber & Privacy Innovation Institute. “We’re hearing from business
leaders that they’d be willing to spend anything to not end up on the front page
of a newspaper for a hack, but they don’t want to spend a penny more than is
necessary and they want to make sure they’re spending their money in the right
areas. That’s going to require the CEO and CISOs to work together. CISOs need to
know what the right level of protection is.” Nocera adds: “Cyber investments are
becoming less about having the latest products from tech vendors and more about
first understanding where the business is most vulnerable, then prioritizing
investments by how likely an attack will occur and how substantial that loss
could be to the business.”
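Nocera's likelihood-times-impact framing maps onto the classic annualized loss expectancy calculation (ALE = ARO x SLE). A minimal sketch of that prioritization, with entirely hypothetical scenarios and dollar figures:

# Minimal sketch of likelihood-times-impact prioritization using the
# standard annualized loss expectancy formula (ALE = ARO x SLE).
# All scenarios and dollar figures below are hypothetical.

scenarios = [
    # (name, annual rate of occurrence, single loss expectancy in dollars)
    ("Ransomware on production systems", 0.30, 4_000_000),
    ("Phishing-led payment fraud",       2.00,   150_000),
    ("Cloud storage misconfiguration",   0.50,   900_000),
]

# Expected yearly loss per scenario, biggest first.
ranked = sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True)

for name, aro, sle in ranked:
    print(f"{name}: ALE = ${aro * sle:,.0f}/year")

Ranking by expected yearly loss surfaces the ransomware scenario first even though phishing occurs more often, which is exactly the "likelihood times loss" prioritization described above.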
Why CISOs Shouldn’t Report to CIOs in the C-Suite
A very common complaint I hear from CISOs is that they do not receive the
resources they need to secure their enterprises. While some companies understand
how and where the CISO fits into the leadership structure, the majority do not.
One individual who works for a local government told me he took a position as a
CIO rather than a CISO because he “knew the CISO role was that of a fall guy.”
He believes he was only offered the CISO position because the CIO wanted someone
to blame if things went badly. This example clearly shows the conflict of
interest that exists when a CISO reports to a CIO. One CISO working in the
industrial market told me that there’s an “inherent tension between me and
others that report to the CIO.” This frequently occurs due to the trade-off
between security and efficiency, which impacts business units throughout an
enterprise. When manufacturing wants to continue running a legacy system with
outdated software and the CISO says no, this impacts revenue.
Why Do We Need An Agile Finance Transformation?
Embracing agility strategically and tactically while encouraging a fail-fast
environment ensures teams have adaptable processes, collaborative mindsets,
and a bias for continuous improvement. An agile finance function is prepared
to provide assurance for financial results and contribute to strategic
decisions in the face of evolving market conditions, the accelerated pace of
change, and the introduction of unforeseeable circumstances. CFOs,
controllers, finance and accounting professionals, and students alike are,
therefore, encouraged to develop agile and scrum expertise to elevate
individual, functional, and organizational performance, further strengthening
the finance function’s value proposition for decades to come. Utilizing agile
and scrum to redefine approaches to core activities like financial planning
and analysis, internal audit, and financial close can position management
accountants to better support the unprecedented number of transformation
initiatives organizations embark upon today. Further, an agile finance function can realize better outcomes, greater value, and faster delivery, enabling its organization to adapt to changing priorities with agility and data-backed insights.
Mozilla patches critical “BigSig” cryptographic bug: Here’s how to track it down and fix it
Many software vendors rely on third-party open source cryptographic tools,
such as OpenSSL, or simply hook up with the cryptographic libraries built into
the operating system itself, such as Microsoft’s Secure Channel on Windows or
Apple’s Secure Transport on macOS and iOS. But Mozilla has always used its own
cryptographic library, known as NSS, short for Network Security Services,
instead of relying on third-party or system-level code. Ironically, this bug
is exposed when affected applications set out to test the cryptographic
veracity of digital signatures provided by the senders of content such as
emails, PDF documents or web pages. In other words, the very act of protecting
you, by checking up front whether a user or website you’re dealing with is an
imposter… could, in theory, lead to you getting hacked by said user or
website. As Ormandy shows in his bug report, it’s trivial to crash an
application outright by exploiting this bug, and not significantly more
difficult to perform what you might call a “controlled crash”, which can
typically be wrangled into an RCE, short for remote code execution.
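Since the headline promises a way to track the bug down, one quick check is to ask the NSS shared library for its version at runtime via the standard NSS_GetVersion() export. A minimal sketch, assuming a Linux host with libnss3.so on the default loader path (the library name differs on macOS and Windows):

# Minimal sketch: query the NSS library version at runtime via ctypes.
# Assumes a Linux host with libnss3.so on the default loader path;
# adjust the library name for other platforms. NSS_GetVersion() is a
# standard NSS export declared in nss.h.
import ctypes

nss = ctypes.CDLL("libnss3.so")
nss.NSS_GetVersion.restype = ctypes.c_char_p

version = nss.NSS_GetVersion().decode()
print(f"NSS version: {version}")

# BigSig (CVE-2021-43527) was fixed in NSS 3.73 and in 3.68.1 ESR;
# anything older on the 3.x line should be treated as vulnerable.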
Zero Trust Shouldn’t Mean Zero Trust in Employees
An effective zero trust experience works for and empowers the employee. To
them, everything feels the same — whether they're accessing their email, a
billing platform, or the HR app. In the background, they don't have broad
access to apps and data that they don't need. This comes down to building a
well-defined and measurable "circle of trust" that is granted to an employee
based on their role and team. With these guardrails in place, you're removing
the friction and providing a good user experience while establishing more
effective security. Security teams must be able to clearly and reliably
enforce a trust boundary that's extended to employees based on what they need
to get their jobs done. From there, zero trust is about building out those
guardrails so that the trust boundary is maintained. No more, no less. Zero
trust should be implemented across the entire HR life cycle, especially when
staffing shortages and the Great Resignation have caused hiring and turnover
fluctuations.
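One way to picture that circle of trust is as a role-to-entitlement map consulted on every access request. A minimal sketch with made-up roles and app names; a real deployment would enforce this in an identity provider or policy engine rather than in application code:

# Minimal sketch of a role-based "circle of trust": each role maps to
# exactly the apps that role needs, and every request is checked against
# it. All roles and app names here are hypothetical.

CIRCLE_OF_TRUST = {
    "finance-analyst": {"email", "billing-platform"},
    "hr-generalist":   {"email", "hr-app"},
    "engineer":        {"email", "source-control", "ci-pipeline"},
}

def is_allowed(role: str, app: str) -> bool:
    """Grant access only if the app falls inside the role's trust boundary."""
    return app in CIRCLE_OF_TRUST.get(role, set())

# The employee experience is identical for every app they're entitled to;
# anything outside the boundary is simply denied.
print(is_allowed("hr-generalist", "hr-app"))            # True
print(is_allowed("hr-generalist", "billing-platform"))  # False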
Understanding Black Box Testing - Types, Techniques, and Examples
To maintain software quality and avoid losing customers to a bad user experience, your application should undergo rigorous scrutiny using suitable testing techniques. Black box testing is the easiest and fastest way to investigate an application's functionality without any knowledge of its code. The white box vs. black box debate is long-running, and each approach has its place. The choice between them depends on how deep you want to go into the structure of the software under test. If you want to test functionality from an end user's perspective, black box testing fits the bill; if you want to direct your testing efforts at how the software is built, including its code structure and design, white box testing works well. Both, however, aim to improve software quality in their own ways. Many black box techniques support this, equivalence partitioning and boundary value analysis among them.
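To make the end-user perspective concrete, here is a minimal black box test sketch using those two techniques against a hypothetical discount function that stands in for the system under test:

# Minimal black box test sketch: the tester knows only the contract of
# apply_discount() (inputs and expected outputs), not its internals.
# The function itself is a hypothetical stand-in for the system under test.
import math

def apply_discount(order_total: float) -> float:
    """System under test: 10% off orders of 100 or more (hypothetical rule)."""
    return order_total * 0.9 if order_total >= 100 else order_total

def test_black_box():
    # Equivalence partition 1: totals below the threshold get no discount.
    assert apply_discount(50) == 50
    # Equivalence partition 2: totals at or above the threshold get 10% off.
    assert apply_discount(200) == 180
    # Boundary values: just below, at, and just above the threshold.
    assert apply_discount(99.99) == 99.99
    assert apply_discount(100) == 90
    assert math.isclose(apply_discount(100.01), 90.009)

test_black_box()
print("all black box checks passed")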
CIO priorities: 10 challenges to tackle in 2022
From robotic process automation to low-code technologies, there's a whole
suite of tools that claim to make the application development process easier.
However, automation should come with a warning: while these tools can lighten
the day-to-day load for IT teams, someone somewhere must ensure that new
applications meet stringent reliability and security standards. Increased automation will mean IT professionals spend more time reviewing and overseeing automated work, so focus on training and development to ensure your staff is ready for that shift in responsibility. With all the talk of automation and low-code development,
it would be easy to assume that the traditional work of the IT department is
done. Nothing could be further from the truth. Yes, the tech team is set to
change, but talented developers – who work alongside their business peers –
remain a valuable and highly prized commodity. To attract and retain IT staff,
CIOs will need to think very hard about the opportunities they offer. Rather
than being a place to go, work is going to become an activity you do in a
collaborative manner, regardless of location.
Cloud numbers don’t add up
The problem is aligning ambition with reality. It’s perhaps also a weirdness
in the definition of “cloud native.” The Cloud Native Computing Foundation
defines “cloud native” as enabling enterprises to “build and run scalable
applications in modern, dynamic environments such as public, private, and
hybrid clouds.” There’s nothing particularly modern about a private cloud/data
center. Scott Carey has described it thus: “Cloud native encompasses the
various tools and techniques used by software developers today to build
applications for the public cloud, as opposed to traditional architectures
suited to an on-premises data center” (emphasis mine). If going cloud native
simply means “doing what we’ve always done, but sprinkled with containers,”
that’s not a very useful data point. “Cloud first,” however, arguably is. If
we’re already at 47% of respondents saying they default to cloud (again, my
assumption is that people weren’t thinking “my private data center” when
answering a question about “cloud first”), then we have a real problem with
measured spend on cloud computing from IDC, Gartner, and even the most
wide-eyed of would-be analyst firms.
The Dark Web: a cyber crime bazaar where data is a hot commodity
Everyone is aware of the Dark Web’s reputation as a playground for cyber
criminals who anonymously trade stolen data and partake in illegal activities.
While in the past it required a degree of technical knowledge to transact on
the Dark Web, in recent years the trading of malware and stolen data has
become increasingly commoditised. As a result, marketplaces, hacker forums and
ransomware group sites are proliferating. Bitglass recently conducted research that shines a light on exactly how Dark Web activity, the value of stolen data, and cyber criminal behaviours have rapidly evolved in
recent years. What we found should trigger alarm bells for enterprises that
want to prevent their sensitive data from ending up on the Dark Web. Back in
2015, Bitglass conducted the world’s first data tracking experiment to
identify exactly how data is viewed and accessed on the Dark Web. This year we
re-ran the experiment and expanded it, posting fake account usernames,
emails and passwords that would supposedly give access to high-profile social
media, retail, gaming, crypto and pirated content networks acquired through
well-known breaches.
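The mechanics behind such a data-tracking experiment amount to honeytokens: each planted credential is unique to the place it was posted, so any later use reveals where it was picked up. A minimal sketch of the general technique (the article does not detail Bitglass's actual methodology):

# Minimal honeytoken sketch: generate fake credentials that are unique
# per posting location, so any later login attempt identifies where the
# data was picked up. This illustrates the general technique only; it is
# not Bitglass's actual methodology. Location names are hypothetical.
import secrets

LOCATIONS = ["marketplace-a", "hacker-forum-b", "paste-site-c"]

def make_honeytoken(location: str) -> dict:
    tag = secrets.token_hex(4)  # unique marker tied to one location
    return {
        "username": f"jsmith_{tag}",
        "password": secrets.token_urlsafe(12),
        "planted_at": location,
    }

# Plant one distinct credential set per location, then watch
# authentication logs for these usernames to see which source leaked.
tokens = {t["username"]: t for t in (make_honeytoken(loc) for loc in LOCATIONS)}
for username, token in tokens.items():
    print(username, "->", token["planted_at"])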
Disaster preparedness: 3 key tactics for IT leaders
Once risks are identified and impacts are evaluated and scored, implement an appropriate risk response. The treatment options are to accept the risk; mitigate it with new or existing controls; transfer it to third parties, often via insurance or risk sharing; or avoid it by ceasing the related business activity. A risk assessment can be coupled
with a business impact analysis (BIA) that provides input into business
continuity and disaster planning. A BIA identifies recovery time objectives
(RTOs), recovery point objectives (RPOs), critical processes, dependence on
critical systems, and many other areas. This reflects the 80/20 rule: rather than creating costly recovery strategies for 100 percent of business functions, focus on the 20 percent of processes that are most critical and must be recovered quickly in a disaster. Once a BIA is completed, organizations can determine their
recovery strategies to maintain continuity of operations during a disaster.
Business continuity plans should be based on the BIA and updated at least
every year.
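That 80/20 prioritization can be made concrete by scoring each process and focusing recovery planning on the top slice. A minimal sketch with hypothetical processes, impact scores, and RTOs:

# Minimal sketch of 80/20 BIA prioritization: score each business process,
# then target recovery planning at the most critical ~20 percent. Process
# names, impact scores (1-10), and RTOs below are hypothetical.

processes = [
    # (name, impact score, recovery time objective in hours)
    ("Order processing",    9,  2),
    ("Payroll",             8, 24),
    ("Customer support",    6, 12),
    ("Internal wiki",       2, 72),
    ("Marketing analytics", 3, 48),
]

# Rank by impact, tightest RTO first as a tiebreaker.
ranked = sorted(processes, key=lambda p: (-p[1], p[2]))

# Focus costly recovery strategies on roughly the top 20 percent.
cutoff = max(1, round(len(ranked) * 0.2))
for name, impact, rto in ranked[:cutoff]:
    print(f"Prioritize: {name} (impact {impact}, RTO {rto}h)")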
Quote for the day:
"Tact is the ability to make a person
see lightning without letting him feel the bolt." --
Orlando A. Battista