3 leadership lessons we can learn from ethical hackers
By nature, hackers possess a knack for looking beyond the obvious to find
what’s hidden. They leverage their ingenuity and resourcefulness to address
threats and anticipate future risks. And most importantly, they are unafraid
to break things to make them better. Likewise, when leading an organization,
you are often faced with problems that, from the outside, look insurmountable.
You must handle challenges that threaten your internal culture or your product
roadmap, and it’s up to you to decide the right path toward progress. Now is
the most critical time to find those hidden opportunities to strengthen your
organization and remain fearless in your decisions toward a stronger path. ...
Leaders must remove ego and cultivate open communication within their
organizations. At HackerOne, we build accountability through company-wide
weekly Ask Me Anything (AMA) sessions to share organizational knowledge, ask
tough questions about the business, and encourage employees to share their
perspectives openly without fear of retaliation. ... Most hackers are
self-taught enthusiasts. Young and without formal cybersecurity training, they
are driven by a passion for their craft. Internal drive propels them to
continue their search for what others miss. If there is a way to see the gaps,
they will find them.
So, you don’t have a chief information security officer? 9 signs your company needs one
The cost to hire and retain a CISO is a major stumbling block for some
organizations. Even promoting someone from within to a newly created CISO post
can be expensive: total compensation for a full-time CISO in the US now averages
$565,000 per year, not including other costs that often come with filling the
position. ... Running cybersecurity on top of their own duties can be a tricky
balancing act for some CIOs, says Cameron Smith, advisory lead for cybersecurity
and data privacy at Info-Tech Research Group in London, Ontario. “A CIO has a
lot of objectives or goals that don’t relate to security, and those sometimes
conflict with one another. Security oftentimes can be at odds with certain
productivity goals. But both of those (roles) should be aimed at advancing the
success of the organization,” Smith says. ... A virtual CISO is one option
for companies seeking to bolster cybersecurity without a full-time CISO. Black
says this approach could make sense for companies trying to lighten the load of
their overburdened CIO or CTO, as well as firms lacking the size, budget, or
complexity to justify a permanent CISO. ... Not having a CISO in place could
cost your company business with existing clients or prospective customers who
operate in regulated sectors, expect their partners or suppliers to have a
rigorous security framework, or require it for certain high-level projects.
Most importantly, AI agents can make advanced capabilities, including
real-time data analysis, predictive modeling, and autonomous decision-making,
available to a much wider group of people in any organization. That, in turn,
gives companies a way to harness the full potential of their data. Simply put,
AI agents are rapidly becoming essential tools for business managers and data
analysts in industrial businesses, including those in chemical production,
manufacturing, energy sectors, and more. ... In the chemical industry, AI
agents can monitor and control chemical processes in real time, minimizing
risks associated with equipment failures, leaks, or hazardous reactions. By
analyzing data from sensors and operational equipment, AI agents can predict
potential failures and recommend preventive maintenance actions. This reduces
downtime, improves safety, and enhances overall production efficiency. ... AI
agents enable companies to make smarter, faster, and more informed decisions.
From predictive maintenance to real-time process optimization, these agents
are delivering tangible benefits across industries. For business managers and
data analysts, the key takeaway is clear: AI agents are not just a future
possibility—they are a present necessity, capable of driving efficiency,
innovation, and growth in today’s competitive industrial environment.
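To make the predictive-maintenance pattern described above a little more concrete, here is a minimal sketch that scores streaming sensor readings against a rolling baseline and flags drift that might warrant a maintenance action. The window size, threshold, sensor name, and `raise_maintenance_ticket` hook are illustrative assumptions, not part of any specific product.

```python
from collections import deque
from statistics import mean, stdev

# Minimal sketch: flag a sensor reading that drifts far from its recent baseline.
# Window size, sigma threshold, and the ticketing hook are illustrative assumptions.
WINDOW = 100          # readings kept in the rolling baseline
SIGMA_LIMIT = 3.0     # how many standard deviations counts as anomalous

class DriftDetector:
    def __init__(self, window: int = WINDOW, sigma_limit: float = SIGMA_LIMIT):
        self.history = deque(maxlen=window)
        self.sigma_limit = sigma_limit

    def update(self, value: float) -> bool:
        """Return True if the new reading looks anomalous versus the baseline."""
        anomalous = False
        if len(self.history) >= 30:  # need enough data for a stable baseline
            mu, sd = mean(self.history), stdev(self.history)
            if sd > 0 and abs(value - mu) > self.sigma_limit * sd:
                anomalous = True
        self.history.append(value)
        return anomalous

def raise_maintenance_ticket(sensor_id: str, value: float) -> None:
    # Placeholder: in practice this would call a CMMS or ticketing API.
    print(f"[maintenance] {sensor_id} reading {value:.2f} outside expected range")

if __name__ == "__main__":
    detector = DriftDetector()
    readings = [20.1 + 0.05 * i for i in range(120)] + [35.0]  # simulated pressure data
    for r in readings:
        if detector.update(r):
            raise_maintenance_ticket("reactor_pressure_01", r)
```

A production agent would of course combine many signals and a trained model rather than a single statistical rule, but the shape of the loop, observe, compare to expectation, recommend action, is the same.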
Want to Modernize Your Apps? Start By Modernizing Your Software Delivery Processes
A healthier approach to app modernization is to focus on modernizing your
processes. Despite momentous changes in application deployment technology over
the past decade or two, the development processes that best drive software
innovation and efficiency — like the interrelated concepts and practices of
agile, continuous integration/continuous delivery (CI/CD) and DevOps — have
remained more or less the same. This is why modernizing your application
delivery processes to take advantage of the most innovative techniques should
be every business’s real focus. When your processes are modern, your ability
to leverage modern technology and update apps quickly to take advantage of new
technology follows naturally. ... In addition to modifying processes
themselves, app modernization should also involve the goal of changing the way
organizations think about processes in general. By this, I mean pushing
developers, IT admins and managers to turn to automation by default when
implementing processes. This might seem unnecessary because plenty of IT
professionals today talk about the importance of automation. Yet, when it
comes to implementing processes, they tend to lean toward manual approaches
because they are faster and simpler to implement initially.
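As a small illustration of "automation by default," the sketch below turns a manual release checklist into a repeatable script whose exit code a CI/CD pipeline can act on. The specific checks and commands are assumptions for the example, not a prescribed toolchain.

```python
import subprocess
import sys

# Sketch of a release gate that used to be a manual checklist item, expressed
# as a repeatable script. The commands below assume pytest and ruff are
# installed; swap in whatever checks your pipeline actually runs.
CHECKS = [
    ("unit tests", ["python", "-m", "pytest", "-q"]),
    ("lint", ["python", "-m", "ruff", "check", "."]),
]

def run_checks() -> bool:
    ok = True
    for name, cmd in CHECKS:
        print(f"running {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"check failed: {name}")
            ok = False
    return ok

if __name__ == "__main__":
    # Exit non-zero so a CI/CD pipeline can block the release automatically.
    sys.exit(0 if run_checks() else 1)
```

The point is less the particular checks than the habit: once the step is a script, it can be versioned, reviewed, and wired into the delivery pipeline instead of living in someone's head.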
The ‘Great IT Rebrand’: Restructuring IT for business success
To champion his reimagined vision for IT, BBNI’s Nester stresses the art of
effective communication and the importance of a solid marketing campaign. In
partnership with corporate communications, Nester established the
Techniculture brand and lineup of related events specifically designed to
align technology, business, and culture in support of enterprise goals.
Quarterly Techniculture town hall meetings anchored by both business and
technology leaders keep the several hundred Technology Solutions team members
abreast of business priorities and familiar with the firm’s money-making
mechanics, including a window into how technology helps achieve specific
revenue goals, Nester explains. “It’s a can’t-miss event and our largest team
engagement — even more so than the CEO videos,” he contends. The next pillar
of the Techniculture foundation is Techniculture Live, an annual leadership
summit. One third of the Technology Solutions Group, about 250 teammates by
Nester’s estimates, participate in the event, which is not a deep dive into
the latest technologies, but rather spotlights business performance and
technology initiatives that have been most impactful to achieving corporate
goals.
The Role of DSPM in Data Compliance: Going Beyond CSPM for Regulatory Success
DSPM is a data-focused approach to securing the cloud environment. By
addressing cloud security from the angle of discovering sensitive data, DSPM
is centered on protecting an organization’s valuable data. This approach helps
organizations discover, classify, and protect data across all platforms,
including IaaS, PaaS, and SaaS applications. Where CSPM is focused on finding
vulnerabilities and risks for teams to remediate across the cloud environment,
DSPM “gives security teams visibility into where cloud data is stored” and
detects risks to that data. Security misconfigurations and vulnerabilities
that may result in the exposure of data can be flagged by DSPM solutions for
remediation, helping to protect an organization’s most sensitive resources.
Beyond simply discovering sensitive data, DSPM solutions also address many
questions of data access and governance. They provide insight into not only
where sensitive data is located, but which users have access to it, how it is
used, and the security posture of the data store. ... Every organization
undoubtedly has valuable and sensitive enterprise, customer, and employee data
that must be protected against a wide range of threats. Organizations can reap
a great deal of benefits from DSPM in protecting data that is not stored
on-premises.
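For a rough sense of the discovery-and-classification step a DSPM tool automates, here is a minimal sketch that scans records for sensitive patterns and reports where the data lives and who can reach it. The patterns, store name, and access list are simplified assumptions for illustration.

```python
import re

# Minimal sketch of DSPM-style discovery: classify records by sensitive data
# patterns, then report the store and its access list. Patterns and the access
# map are simplified assumptions.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(record: str) -> set[str]:
    """Return the sensitive data categories found in a record."""
    return {label for label, rx in PATTERNS.items() if rx.search(record)}

def scan_store(store_name: str, records: list[str], access_list: list[str]) -> None:
    findings = set()
    for record in records:
        findings |= classify(record)
    if findings:
        print(f"{store_name}: contains {sorted(findings)}; accessible to {access_list}")

if __name__ == "__main__":
    scan_store(
        "s3://customer-exports/2024",                    # hypothetical data store
        ["jane@example.com, card 4111 1111 1111 1111"],  # sample record
        ["analytics-team", "support-contractors"],       # hypothetical access list
    )
```

Real DSPM products pair this kind of classification with posture checks and access analysis across IaaS, PaaS, and SaaS, but the core question they answer is the same: what sensitive data do we have, where is it, and who can touch it.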
The hidden challenges of AI development no one talks about
Currently, AI developers spend too much of their time (up to 75%) with the
"tooling" they need to build applications. Unless they have the technology to
spend less time tooling, these companies won't be able to scale their AI
applications. To add to technical challenges, nearly every AI startup is
reliant on NVIDIA GPU compute to train and run their AI models, especially at
scale. Developing a good relationship with hardware suppliers or cloud
providers like Paperspace can help startups, but the cost of purchasing or
renting these machines quickly becomes the largest expense any smaller company
will run into. Additionally, there is currently a battle to hire and keep AI
talent. We've seen recently how companies like OpenAI are trying to poach
talent from other heavy hitters like Google, which makes the process for
attracting talent at smaller companies much more difficult. ... Training a
Deep Learning model is almost always extremely expensive. This is a result of
the combined function of resource costs for the hardware itself, data
collection, and employees. In order to ameliorate this issue facing the
industry's newest players, we aim to achieve several goals for our users:
Creating an easy-to-use environment, introducing an inherent replicability
across our products, and providing access at as low a cost as possible.
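One small, framework-agnostic illustration of the replicability goal mentioned above: pin random seeds and persist the exact run configuration alongside the result, so an experiment can be repeated and audited later. The hyperparameters and manifest file name are assumptions for the example.

```python
import json
import random
import time

# Sketch of the "replicability" idea: seed randomness and record the run
# configuration with the result. Framework-specific seeding (NumPy, a deep
# learning library, etc.) would be added next to the standard-library seed.
def reproducible_run(config: dict) -> dict:
    random.seed(config["seed"])
    # ... framework-specific seeding and the actual training would go here ...
    metric = sum(random.random() for _ in range(100)) / 100  # stand-in "result"
    return {"config": config, "metric": metric, "timestamp": time.time()}

if __name__ == "__main__":
    cfg = {"seed": 42, "lr": 3e-4, "batch_size": 32}  # illustrative hyperparameters
    result = reproducible_run(cfg)
    # Persist config + result together so the run can be reproduced later.
    with open("run_manifest.json", "w") as fh:
        json.dump(result, fh, indent=2)
    print(result["metric"])
```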
Transforming code scanning and threat detection with GenAI
The complexity of software components and stacks can sometimes be
mind-bending, so it is imperative to connect all these dots in as seamless and
hands-free a way as possible. ... If you’re a developer with a mountain of
feature requests and bug fixes on your plate and then receive a tsunami of
security tickets that nobody’s incentivized to care about… guess which ones
are getting pushed to the bottom of the pile? Generative AI-based agentic
workflows are giving cybersecurity and engineering teams alike reason to see
light at the end of the tunnel and to consider the possibility that a secure
software development lifecycle (SSDLC) is on the near-term horizon. And we’re
seeing some promising changes
already today in the market. Imagine having an intelligent assistant that can
automatically track issues, figure out which ones matter most, suggest fixes,
and then test and validate those fixes, all at the speed of computing! We
still need our developers to oversee things and make the final calls, but the
software agent swallows most of the burden of running an efficient program.
... AI’s evolution in code scanning fundamentally reshapes our approach to
security. Optimized generative AI LLMs can assess millions of lines of code in
seconds and pay attention to even the most subtle and nuanced set of patterns,
finding the needle in a haystack that is almost always missed by humans.
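To picture the agentic triage loop described above, here is a minimal sketch that ranks incoming security findings, drafts a candidate fix for the highest-priority ones, and queues the result for human review. The scoring rule is deliberately simple, and `suggest_fix` is a hypothetical placeholder for a model call, not a real vendor API.

```python
from dataclasses import dataclass

# Sketch of an agentic triage loop: rank findings, draft fixes for the top few,
# and hand them back for human review. `suggest_fix` stands in for an LLM call.
@dataclass
class Finding:
    id: str
    severity: int        # 1 (low) .. 5 (critical)
    exploitable: bool
    file: str

def priority(f: Finding) -> int:
    # Simple illustrative scoring: exploitable issues outrank everything else.
    return f.severity + (10 if f.exploitable else 0)

def suggest_fix(finding: Finding) -> str:
    # Placeholder for a model call that proposes a patch for the finding.
    return f"# proposed patch for {finding.id} in {finding.file}"

def triage(findings: list[Finding], budget: int = 3) -> list[tuple[Finding, str]]:
    ranked = sorted(findings, key=priority, reverse=True)
    # Only the top `budget` items get an automated fix attempt; the rest wait.
    return [(f, suggest_fix(f)) for f in ranked[:budget]]

if __name__ == "__main__":
    queue = [
        Finding("CVE-2024-0001", 5, True, "auth/session.py"),
        Finding("LINT-778", 2, False, "ui/render.py"),
        Finding("SQLI-12", 4, True, "api/orders.py"),
    ]
    for finding, patch in triage(queue):
        print(finding.id, "->", patch)   # a human still reviews before merging
```

The developer stays in the loop for the final call; the agent simply absorbs the sorting, drafting, and validation work that otherwise sinks to the bottom of the backlog.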
5 Tips for Optimizing Multi-Region Cloud Configurations
Multi-region cloud configurations get very complicated very quickly,
especially for active-active environments where you’re replicating data
constantly. Containerized microservice-based applications allow for faster
startup times, but they also drive up the number of resources you’ll need.
Even active-passive environments for cold backup-and-restore use cases are
resource-heavy. You’ll still need a lot of instances, AMI IDs, snapshots, and
more to achieve a reasonable disaster recovery turnaround time. ... The CAP
theorem forces you to choose only two of the three options: consistency,
availability, and partition tolerance. Since we’re configuring for
multi-region, partition tolerance is non-negotiable, which leaves a battle
between availability and consistency. Yes, you can hold onto both, but you’ll
drive high costs and an outsized management burden. If you’re running
active-passive environments, opt for consistency over availability. This
allows you to use Platform-as-a-Service (PaaS) solutions to replicate your
database to your passive region. ... For active-passive environments, routing
isn’t a serious concern. You’ll use default priority global routing to support
failover handling, end of story. But for active-active environments, you’ll
want different routing policies depending on the situation in that region.
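A rough sketch of those two routing postures, assuming hypothetical region names, weights, and health checks: priority-based failover for active-passive, and weighted routing across healthy regions for active-active.

```python
import random

# Rough sketch of the routing choices discussed above. Region names, weights,
# and the health flags are illustrative assumptions.
REGIONS = {
    "us-east-1": {"role": "active", "weight": 70, "healthy": True},
    "eu-west-1": {"role": "active", "weight": 30, "healthy": True},
    "us-west-2": {"role": "passive", "weight": 0, "healthy": True},
}

def route_active_passive(primary: str, standby: str) -> str:
    # Priority/failover routing: everything goes to the primary unless it is down.
    return primary if REGIONS[primary]["healthy"] else standby

def route_active_active() -> str:
    # Weighted routing across healthy active regions.
    candidates = [(name, r["weight"]) for name, r in REGIONS.items()
                  if r["role"] == "active" and r["healthy"]]
    names, weights = zip(*candidates)
    return random.choices(names, weights=weights, k=1)[0]

if __name__ == "__main__":
    print("active-passive ->", route_active_passive("us-east-1", "us-west-2"))
    print("active-active  ->", route_active_active())
```

In practice these decisions live in your DNS or global load-balancing layer rather than application code, but the policy split is the same: simple failover when one region is passive, situational weighting when both are live.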
Why API-First Matters in an AI-Driven World
Implementing an API-first approach at scale is a nontrivial exercise. The
fundamental reason for this is that API-first involves “people.” It’s central
to the methodology that APIs are embraced as socio-technical assets, and
therefore, it requires a change in how “people,” both technical and
non-technical, work and collaborate. Some common objections to adopting
API-First raise their head within organizations, along with some newer
framings driven by the eagerness of many to participate in the AI-hyped
landscape. ... Don’t try to design for all eventualities. Instead, follow good
extensibility patterns that enable future evolution and design “just enough”
of the API based on current needs. There are added benefits when you combine
this tactic with API specifications, as you can get fast feedback loops on
that design before any investments are made in writing code or creating test
suites. ... An API-First approach is powerful precisely because it starts with
a use-case-oriented mindset, thinking about the problem being solved and how
best to present data that aligns with that solution. By exposing data
thoughtfully through APIs, companies can encapsulate domain-specific
knowledge, apply business logic, and ensure that data is served securely, on a
self-service basis, and in a form tailored to business needs.
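As a small sketch of the "design just enough, leave room to evolve" tactic, the example below models only the fields current consumers need and keeps an explicit extension point for additive change. The resource and field names are hypothetical; the idea is that the contract can be reviewed and agreed before any implementation code is written.

```python
from dataclasses import dataclass, field

# Sketch of "just enough" API design with room to evolve. The resource and
# field names are hypothetical; `extensions` is an additive extension point so
# new attributes can appear without breaking existing clients.
@dataclass
class Order:
    order_id: str
    status: str                       # e.g. "pending", "shipped"
    total_cents: int
    # New attributes can land here first, then be promoted to first-class
    # fields in a later, non-breaking revision of the contract.
    extensions: dict = field(default_factory=dict)

def to_response(order: Order) -> dict:
    """Serialize the response exactly as the agreed contract describes."""
    return {
        "order_id": order.order_id,
        "status": order.status,
        "total_cents": order.total_cents,
        "extensions": order.extensions,
    }

if __name__ == "__main__":
    print(to_response(Order("ord_123", "pending", 4999, {"loyalty_tier": "gold"})))
```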
Quote for the day:
"Difficulties in life are intended to
make us better, not bitter." -- Dan Reeves