Quote for the day:
"Law of Leadership: A successful team with 100 members has 100 leaders." -- Lance Secretan
Citizen Development: The Wrong Strategy for the Right Problem

The latest generation of citizen development offenders is the low-code and
no-code platforms that promise to democratize software development by enabling
those without formal programming education to build applications. These
platforms fueled enthusiasm around speedy app development — especially among
business users — but their limitations are similar to those of the generations
of platforms that came before. ... Don't get me wrong — the intentions behind
citizen development come from a legitimate place. More often than not, IT needs
to deliver faster to keep up with the business. But these tools promise more
than they can deliver and, worse, usually result in negative unintended
consequences. Think of it as a digital house of cards, where disparate apps
combine to create unscalable systems that can take years and/or millions of
dollars to fix. ... Struggling to keep up with business demands is a common
refrain for IT teams. Citizen development has attempted to bridge the gap, but
it typically creates more problems than solutions. Rather than relying on
workarounds and quick fixes that potentially introduce security risks and
inefficiency — and certainly rather than disintermediating IT — businesses
should embrace the power of GenAI to support their developers and ultimately to
make IT more responsive and capable.
Researchers Test a Blockchain That Only Quantum Computers Can Mine

The quantum blockchain presents a path forward for reducing the environmental
cost of digital currencies. It also provides a practical incentive for deploying
early quantum computers, even before they become fully fault-tolerant or
scalable. In this architecture, the cost of quantum computing — not electricity
— becomes the bottleneck. That could shift mining centers away from regions with
cheap energy and toward countries or institutions with advanced quantum
computing infrastructure. The researchers also argue that this architecture
offers broader lessons. ... “Beyond serving as a proof of concept for a
meaningful application of quantum computing, this work highlights the potential
for other near-term quantum computing applications using existing technology,”
the researchers write. ... One of the major limitations, as mentioned, is cost.
Quantum computing time remains expensive and limited in availability, even as
energy use is reduced. At present, quantum PoQ may not be economically viable
for large-scale deployment. As progress continues in quantum computing, those
costs may be mitigated, the researchers suggest. D-Wave machines use quantum
annealing — a different model from the quantum computing platforms pursued by
companies like IBM and Google.
Enterprise Risk Management: How to Build a Comprehensive Framework

Risk objects are the human capital, physical assets, documents and concepts
(e.g., “outsourcing”) that pose risk to an organization. Stephen Hilgartner, a
Cornell University professor, once described risk objects as “sources of danger”
or “things that pose hazards.” The basic idea is that any simple action, like
driving a car, has associated risk objects – such as the driver, the car and the
roads. ... After the risk objects have been defined, the risk management
processes of identification, assessment and treatment can begin. The goal of ERM
is to develop a standardized system that not only acknowledges the risks and
opportunities in every risk object but also assesses how the risks can impact
decision-making. For every risk object, hazards and opportunities must be
acknowledged by the risk owner. Risk owners are the individuals managerially
accountable for the risk objects. These leaders and their risk objects establish
a scope for the risk management process. Moreover, they ensure that all risks
are properly managed based on approved risk management policies. To complete all
aspects of the risk management process, risk owners must guarantee that risks
are accurately tied to the budget and organizational strategy.
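
To make these relationships concrete, here is a minimal sketch of how risk
objects, risk owners, and their assessed risks might be represented in code.
The class names, scoring scale, and escalation threshold are illustrative
assumptions, not part of any formal ERM standard.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative data model only; names, scales, and thresholds are assumptions.

@dataclass
class Risk:
    description: str
    impact: int       # assumed scale: 1 (minor) .. 5 (severe)
    likelihood: int   # assumed scale: 1 (rare)  .. 5 (almost certain)

    def score(self) -> int:
        # Simple impact x likelihood score used to prioritize treatment
        return self.impact * self.likelihood

@dataclass
class RiskObject:
    name: str    # e.g. "outsourcing", "company vehicle"
    owner: str   # the manager accountable for this risk object
    risks: List[Risk] = field(default_factory=list)

    def needs_escalation(self, threshold: int = 12) -> List[Risk]:
        # Risks at or above the threshold feed into budget and strategy reviews
        return [r for r in self.risks if r.score() >= threshold]

# The "driving a car" example from the text, with invented numbers
car = RiskObject(name="company vehicle", owner="fleet manager")
car.risks.append(Risk("driver fatigue on long routes", impact=4, likelihood=3))
car.risks.append(Risk("poor road conditions in winter", impact=3, likelihood=4))

for risk in car.needs_escalation():
    print(f"{car.name} ({car.owner}): {risk.description} -> score {risk.score()}")
```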
Choosing consequence-based cyber risk management to prioritize impact over probability, redefine industrial security

Nonetheless, the biggest challenge in applying consequence-based cyber risk
management is the limited availability of holistic information about cyber
events and their outcomes. Most companies struggle to gauge the probable
damage of attacks because historical data is inadequate or information
systems are fragmented. This has led to increased adoption of analytics and
threat intelligence technologies that enable organizations to simulate the
‘most likely’ outcomes of cyber-attacks and anticipate probable scenarios. ...
“A winning strategy incorporates prevention and recovery. Proactive steps
like vulnerability assessments, threat hunting, and continuous monitoring
reduce the likelihood and impact of incidents,” according to Morris.
“Organizations can quickly restore operations when incidents occur with
robust incident response plans, disaster recovery strategies, and regular
simulation exercises. This dual approach is essential, especially amid
rising state-sponsored cyberattacks.” ... “To overcome data limitations,
organizations can combine diverse data sources (historical incident records,
threat intelligence feeds, industry benchmarks, and expert insights) to
build a well-rounded picture,” Morris detailed. “Scenario analysis and
qualitative assessments help fill in gaps when quantitative data is sparse.
Engaging cross-functional teams for continuous feedback ensures these models
evolve with real-world insights.”
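
As a rough illustration of prioritizing impact over probability, the sketch
below ranks hypothetical attack scenarios by worst credible consequence first
and uses estimated likelihood only as a tie-breaker. All names and figures are
invented.

```python
# Illustrative only: order scenarios by consequence, not by how likely they
# seem. Every figure below is made up for the example.

scenarios = [
    {"name": "ransomware on the plant historian", "impact_usd": 12_000_000, "probability": 0.05},
    {"name": "phishing leads to email compromise", "impact_usd": 400_000, "probability": 0.40},
    {"name": "tampering with safety-system logic", "impact_usd": 50_000_000, "probability": 0.01},
]

# Consequence-based ordering: worst outcome first; probability only breaks ties.
by_consequence = sorted(scenarios, key=lambda s: (-s["impact_usd"], -s["probability"]))

for s in by_consequence:
    print(f'{s["name"]}: impact ${s["impact_usd"]:,}, estimated p={s["probability"]}')
```

Ranking purely by likelihood would put the phishing scenario first; the
consequence-based view surfaces the safety-system scenario despite its low
estimated probability.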
The CTO vs. CMO AI power struggle - who should really be in charge?

An argument can be made that the CTO should oversee everything technical,
including AI. Your CTO is already responsible for your company's technology
infrastructure, data security, and system reliability, and AI directly
impacts all these areas. But does that mean the CTO should dictate what AI
tools your creative team uses? Does the CTO understand the fundamentals of
what makes good content or the company's marketing objectives? That sounds
more like a job for your creative team or your CMO. On the other hand, your
CMO handles everything from brand positioning and revenue growth to customer
experiences. But does that mean they should decide what AI tools are used
for coding or managing company-wide processes or even integrating company
data? You see the problem, right? ... Once a tool is chosen, our CTO will
step in. They perform their due diligence to ensure our data stays secure,
confidential information isn't leaked, and none of our secrets end up on the
dark web. That said, if your organization is large enough to need a
dedicated Chief AI Officer (CAIO), their role shouldn't be deciding AI tools
for everyone. Instead, they're a mediator who connects the dots between
teams.
Why Cyber Quality Is the Key to Security

To improve security, organizations must adopt foundational principles and
assemble teams accountable for monitoring safety concerns. Cyber resilience
and cyber quality are two pillars that every institution — especially
at-risk ones — must embrace. ... Do we have a clear and tested cyber
resilience plan to reduce the risk and impact of cyber threats to our
business-critical operations? Is there a designated team or individual
focused on cyber resilience and cyber quality? Are we focusing on long-term
strategies, targeted at sustainable and proactive solutions? If the answer
to any of these questions is no, something needs to change. This is where
cyber quality comes in. Cyber quality is about prioritization and
sustainable long-term strategy for cyber resilience, and is focused on
proactive/preventative measures to ensure risk mitigation. This principle is
not about ticking checkboxes on controls that show very little value in the
long run. ... Technology alone doesn't solve cybersecurity problems — people are
the root of both the challenges and the solutions. By embedding cyber
quality into the core of your operations, you transform cybersecurity from a
reactive cost center into a proactive enabler of business success.
Organizations that prioritize resilience and proactive governance will not
only mitigate risks but thrive in the digital age.
ISO 27001: Achieving data security standards for data centers

Achieving ISO 27001 certification is not an overnight process. It’s a
journey that requires commitment, resources, and a structured approach in
order to align the organization’s information security practices with the
standard’s requirements. The first step in the process is conducting a
comprehensive risk assessment. This assessment involves identifying
potential security risks and vulnerabilities in the data center’s
infrastructure and understanding the impact these risks might have on
business operations. This forms the foundation for the ISMS and determines
which security controls are necessary. ... A crucial, yet often overlooked,
aspect of ISO 27001 compliance is the proper destruction of data. Data
centers are responsible for managing vast amounts of sensitive information,
and ensuring that data is securely sanitized when it is no longer needed is
a critical component of maintaining information security. Improper data
disposal can lead to serious security risks, including unauthorized access
to confidential information and data breaches. ... Whether it's personal
information, financial records, intellectual property, or any other type of
sensitive data, the potential risks of improper disposal are too great to
ignore. Data breaches and unauthorized access can result in significant
financial loss, legal liabilities, and reputational damage.
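
The risk assessment that anchors the ISMS is often captured in a simple risk
register. The sketch below shows one possible shape for such a register; the
scoring scale, acceptance threshold, and candidate controls are illustrative
assumptions, not requirements of ISO 27001.

```python
# Hypothetical risk register for a data center ISMS; scales, threshold, and
# control descriptions are illustrative, not mandated by ISO 27001.

RISK_ACCEPTANCE_THRESHOLD = 9  # assumed: scores at or above this must be treated

register = [
    # (asset, threat, impact 1-5, likelihood 1-5, candidate control)
    ("retired backup media", "improper disposal of sensitive data", 5, 3,
     "documented media sanitization procedure"),
    ("remote admin access", "credential theft", 4, 3,
     "multi-factor authentication"),
    ("cooling plant", "single point of failure", 3, 2,
     "redundant cooling units"),
]

for asset, threat, impact, likelihood, control in register:
    score = impact * likelihood
    decision = "treat" if score >= RISK_ACCEPTANCE_THRESHOLD else "accept"
    print(f"{asset}: {threat} -> score {score}, decision: {decision} ({control})")
```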
Understanding code smells and how refactoring can help

Typically, code smells stem from a failure to write source code in
accordance with necessary standards. In other cases, a smell indicates that the
documentation required to clearly define the project's development standards
and expectations was incomplete, inaccurate or nonexistent. There are many
situations that can cause code smells, such as improper dependencies between
modules, an incorrect assignment of methods to classes or needless
duplication of code segments. Code that is particularly smelly can
eventually cause profound performance problems and make business-critical
applications difficult to maintain. It's possible that the source of a code
smell may cause cascading issues and failures over time. ... The best time
to refactor code is before adding updates or new features to an application.
It is good practice to clean up existing code before programmers add any new
code. Another good time to refactor code is after a team has deployed code
into production. After all, developers have more time than usual to clean up
code before they're assigned a new task or a project. One caveat to
refactoring is that teams must make sure there is complete test coverage
before refactoring an application's code. Otherwise, the refactoring process
could simply restructure broken pieces of the application for no
gain.
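
As a small illustration of the "needless duplication" smell and a
behavior-preserving refactoring, consider the hypothetical before-and-after
below. The function names and discount rule are invented for the example; the
point is that tests should confirm identical behavior before and after.

```python
# Before: the same discount rule is duplicated in two functions (a code smell).
def checkout_total_online(prices):
    total = sum(prices)
    if total > 100:
        total *= 0.9  # 10% discount, duplicated below
    return round(total, 2)

def checkout_total_in_store(prices):
    total = sum(prices)
    if total > 100:
        total *= 0.9  # same rule copied; every change must be made twice
    return round(total, 2)

# After: the shared rule lives in one place; behavior is unchanged, which is
# why existing test coverage matters before refactoring.
def apply_volume_discount(total):
    return total * 0.9 if total > 100 else total

def checkout_total(prices):
    return round(apply_volume_discount(sum(prices)), 2)

assert checkout_total([60, 70]) == checkout_total_online([60, 70]) == 117.0
```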
Handling Crisis: Failure, Resilience And Customer Communication

Failure is something leaders want to reduce as much as they can, and it’s
possible to design products with graceful failure in mind. This is also called
graceful degradation and can be thought of as tolerance of faults. It means
that core functions remain usable as parts or connectivity fail. You want any
failure to cause as little damage or loss of service as possible. Think of it
as a stopover on the way to failing safely: when a plane's engines fail, we
want the plane to glide, not plummet. ...
Resilience requires being on top of it all: monitoring, visibility, analysis
and meeting and exceeding the SLAs your customers demand. For service
providers, particularly in tech, you can focus on a full suite of telemetry
from the operational side of the business and decide your KPIs and OKRs. You
can also look at your customers’ perceptions via churn rate, customer
lifetime value, Net Promoter Score and so on. ... If you are to cope with
the speed and scale of potential technical outages, this is essential.
Accuracy, then speed, should be your priorities when it comes to
communicating about outages. The more of both, the better, but accuracy is
the most important, as it allows customers to make informed choices as they
manage the impact on their own businesses.
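
A minimal sketch of graceful degradation as described above: the primary path
is tried first, and a reduced-but-usable fallback keeps the core function
alive when it fails. The pricing service and cached value here are
hypothetical.

```python
import logging

# Hypothetical example: serve a possibly stale cached value when the live
# pricing service is unreachable, instead of failing the whole request.

_cache = {"EUR/USD": 1.08}  # last known good value (illustrative)

def fetch_live_rate(pair):
    raise TimeoutError("pricing service unreachable")  # simulate an outage

def get_rate(pair):
    """Return (rate, degraded); degraded=True means a cached value was served."""
    try:
        rate = fetch_live_rate(pair)
        _cache[pair] = rate
        return rate, False
    except (TimeoutError, ConnectionError) as exc:
        logging.warning("Falling back to cached rate for %s: %s", pair, exc)
        return _cache[pair], True  # core function stays usable, just degraded

rate, degraded = get_rate("EUR/USD")
print(f"EUR/USD = {rate} (degraded={degraded})")
```

Surfacing the degraded flag also supports the communication point above:
customers can be told, accurately, that they are seeing cached data while the
outage is handled.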
Approaches to Reducing Technical Debt in Growing Projects

Technical debt, also known as “tech debt,” refers to the extra work developers
incur by taking shortcuts or delaying necessary code improvements during
software development. Though sometimes these shortcuts serve a short-term goal —
like meeting a tight release deadline — accumulating too many compromises often
results in buggy code, fragile systems, and rising maintenance costs. ...
Massive rewrites can be risky and time-consuming, potentially halting your
roadmap. Incremental refactoring offers an alternative: focus on high-priority
areas first, systematically refining the codebase without interrupting ongoing
user access or new feature development. ... Not all parts of your application
contribute to technical debt equally. Concentrate on elements tied directly to
core functionality or user satisfaction, such as payment gateways or account
management modules. Use metrics like defect density or customer support logs to
identify “hotspots” that accumulate excessive technical debt. ... Technical debt
often creeps in when teams skip documentation, unit tests, or code reviews to
meet deadlines. A clear “definition of done” helps ensure every feature meets
quality standards before it’s marked complete.
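
One way to locate the "hotspots" mentioned above is to combine change
frequency with defect counts per module; the sketch below does exactly that
with invented numbers, purely to show the idea.

```python
# Illustrative hotspot analysis: modules that change often AND accumulate
# defects are the strongest candidates for incremental refactoring.
# All numbers are invented for the example.

modules = {
    "payments/gateway.py": {"commits_90d": 42, "defects_90d": 9},
    "accounts/profile.py": {"commits_90d": 18, "defects_90d": 2},
    "reports/exporter.py": {"commits_90d": 5, "defects_90d": 6},
}

def hotspot_score(stats):
    # Simple heuristic: defect count weighted by how often the code churns.
    return stats["defects_90d"] * stats["commits_90d"]

ranked = sorted(modules.items(), key=lambda kv: hotspot_score(kv[1]), reverse=True)
for name, stats in ranked:
    print(f"{name}: hotspot score {hotspot_score(stats)}")
```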