Quote for the day:
"When you stop chasing the wrong things
you give the right things a chance to catch you." -- Lolly Daskal

Because agentic AI can complete complex workflows rather than simply generating
content, it opens the door to a variety of AI-assisted use cases in software
development that extend far beyond writing code — which, to date, has been the
main way that software developers have leveraged AI. ... But agentic AI largely
removes the need to spell out step-by-step instructions or to perform actions
manually. With just a sentence or two, developers can prompt AI to carry out
complex, multi-step tasks. It's important to note that, for the most part,
agentic AI use cases like those described above remain theoretical. Agentic AI
remains a fairly new and quickly evolving field. The technology to do the sorts
of things mentioned here theoretically exists, but existing tool sets for
enabling specific agentic AI use cases are limited. ... It's also important to
note that agentic AI poses new challenges for software developers. One is the
risk that AI will make the wrong decisions. Like any LLM-based technology, AI
agents can hallucinate, causing them to perform in undesirable ways. For this
reason, it's tough to imagine entrusting high-stakes tasks to AI agents without
requiring a human to supervise and validate them. Agentic AI also poses security
risks. If agentic AI systems are compromised by threat actors, any tools or data
that AI agents can access (such as source code) could also be exposed.
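
The supervision point above lends itself to a concrete pattern: gate any high-stakes tool call behind explicit human approval before the agent acts. Below is a minimal sketch of that idea; the agent plan, tool names, and console-based approval flow are hypothetical and do not reflect any particular agent framework's API.

```typescript
// Hypothetical sketch: a human-approval gate in front of an agent's tool calls.
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

type ToolCall = { tool: string; args: Record<string, string>; highStakes: boolean };

async function approve(call: ToolCall): Promise<boolean> {
  // Low-stakes actions run automatically; high-stakes ones require a human "yes".
  if (!call.highStakes) return true;
  const rl = readline.createInterface({ input: stdin, output: stdout });
  const answer = await rl.question(
    `Agent wants to run ${call.tool}(${JSON.stringify(call.args)}). Approve? (y/n) `
  );
  rl.close();
  return answer.trim().toLowerCase() === "y";
}

async function runPlan(plan: ToolCall[]): Promise<void> {
  for (const call of plan) {
    if (await approve(call)) {
      console.log(`Executing ${call.tool}`); // a real agent would dispatch to the tool here
    } else {
      console.log(`Skipped ${call.tool}: human reviewer declined`);
    }
  }
}

// Example plan mixing routine and high-stakes steps (illustrative names only).
void runPlan([
  { tool: "openPullRequest", args: { repo: "demo" }, highStakes: false },
  { tool: "deployToProduction", args: { env: "prod" }, highStakes: true },
]);
```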

The next phase of identity security must focus on phishing-resistant
authentication, seamless access, and decentralized identity management. The key
principle guiding this transformation is phishing resistance by design. The
adoption of FIDO2 and WebAuthn standards enables passwordless
authentication using cryptographic key pairs. Because the private key never
leaves the user’s device, attackers cannot intercept it. These methods eliminate
the weakest link — human error — by ensuring that authentication remains secure
even if users unknowingly interact with malicious links or phishing campaigns.
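
As a rough illustration of the FIDO2/WebAuthn flow described above, a browser can ask an authenticator to create a key pair as sketched below. The relying-party values and challenge handling are placeholder assumptions; in a real deployment the challenge comes from the server, which also verifies and stores the resulting credential.

```typescript
// Browser-side sketch of a WebAuthn/FIDO2 registration ceremony.
async function registerPasskey(
  challengeFromServer: ArrayBuffer,
  userId: ArrayBuffer
): Promise<PublicKeyCredential> {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: challengeFromServer, // server-generated, single-use
      rp: { name: "Example Corp", id: "example.com" }, // hypothetical relying party
      user: { id: userId, name: "alice@example.com", displayName: "Alice" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { userVerification: "required" },
    },
  });
  if (!credential) throw new Error("registration was cancelled");
  // Only the public key and credential ID leave the device; the private key stays
  // in the authenticator, which is what makes the method phishing-resistant.
  return credential as PublicKeyCredential;
}
```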
... By leveraging blockchain-based verifiable credentials — digitally signed,
tamper-evident credentials issued by a trusted entity — wallets enable users to
securely authenticate to multiple resources without exposing their personal data
to third parties. These credentials can include identity proofs, such as
government-issued IDs, employment verification, or certifications, which enable
strong authentication. Using them for authentication reduces the risk of
identity theft while improving privacy. Modern authentication must allow users
to register once and reuse their credentials seamlessly across services. This
concept reduces redundant onboarding processes and minimizes the need for
multiple authentication methods.
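
The signature mechanics behind such tamper-evident credentials can be sketched with ordinary public-key cryptography. The example below uses Node's crypto module and a made-up claim set; it stands in for, rather than implements, a full W3C verifiable-credentials stack (which adds schemas, revocation, and selective disclosure).

```typescript
// Minimal sketch of a digitally signed, tamper-evident credential check.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Issuer side: sign a claim set with the issuer's private key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const credential = Buffer.from(
  JSON.stringify({
    subject: "did:example:alice", // hypothetical identifier
    claim: { employer: "Example Corp", role: "Engineer" },
    issued: "2025-01-01",
  })
);
const signature = sign(null, credential, privateKey);

// Verifier side: anyone holding the issuer's public key can confirm the credential
// was not tampered with, without calling back to the issuer or seeing other user data.
const ok = verify(null, credential, publicKey, signature);
console.log(ok ? "credential verified" : "credential rejected");
```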

Seeking a job as a government CIO offers a chance to make a real impact on the
lives of citizens, says Aparna Achanta, security architect and leader at IBM
Consulting -- Federal. CIOs typically
lead a wide range of projects, such as upgrading systems in education, public
safety, healthcare, and other areas that provide critical public services. "They
[government CIOs] work on large-scale projects that benefit communities beyond
profits, which can be very rewarding and impactful," Achanta observed in an
online interview. "The job also gives you an opportunity for leadership growth
and the chance to work with a wide range of departments and people." ... "Being
a government CIO might mean dealing with slow processes and bureaucracy,"
Achanta says. "Most of the time, decisions take longer because they have to go
through several layers of approval, which can delay projects." Government CIOs
face unique challenges, including budget constraints, a constantly evolving
mission, and increased scrutiny from government leaders and the public. "Public
servants must be adept at change management in order to be able to pivot and
implement the priorities of their administration to the best of their ability,"
Tamburrino says. Government CIOs are often frustrated by a hierarchy that runs
at a far slower pace than their enterprise counterparts.

Watching your mental and physical health is critical. Setting boundaries is
something that helps the entire team, not just you as the cyber leader. One rule we
have in my team is that we do not use work chat after business hours unless
there are critical events. Everyone needs a break and sometimes hearing a text
or chat notification can create undue stress. Another critical aspect of being a
cybersecurity professional is to hold to your integrity. People often do not
like the fact that we have to monitor, report, and investigate systems and human
behavior. When the pushback to this work takes the form of unprofessional
behavior or defensiveness, it can cause great personal stress. ... Executive
leadership plays one of the most critical roles in supporting the CISO. Without
executive level support, we would be crushed by the demands and the frequent
conflicts of interest we experience. For example, project managers, CIOs, and
other IT leadership roles might prioritize budget, cost, timelines, or other
needs above security. A security professional prioritizes people (safety) and
security above cost or timelines. The nature of our roles requires executive
leadership support to balance the security and privacy risk (and what is
acceptable to an executive). I think in several instances the executive board
and CEOs understand this, but we are still a growing profession and there needs
to be more education in this area.

Relying solely on labeling tools faces multiple operational challenges. First,
labeling tools often lack accuracy. This creates a paradox: inaccurate labels
may legitimize harmful media, while unlabeled content may appear trustworthy.
Moreover, users may not view basic AI edits, such as color correction, as
manipulation, while opinions differ on changes like facial adjustments or
filters. It remains unclear whether simple color changes require a label, or if
labeling should only occur when media is substantively altered or generated
using AI. Similarly, many synthetic media artifacts, such as images showing
white substances on a person’s face, may not fit the standard definition of
pornography; however, they can still be humiliating. ... Second, synthetic media use
cases exist on a spectrum, and the presence of mixed AI- and human-generated
content adds complexity and uncertainty in moderation strategies. For example,
when moderating human-generated media, social media platforms only need to
identify and remove harmful material. In the case of synthetic media, it is
often necessary to first determine whether the content is AI-generated and then
assess its potential harm. This added complexity may lead platforms to adopt
overly cautious approaches to avoid liability. These challenges can undermine
the effectiveness of labeling.
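
The two-step flow described here (provenance first, harm second) can be expressed as a small decision function. The detector interface, thresholds, and outcomes below are purely hypothetical placeholders for real classifiers and real platform policies.

```typescript
// Hypothetical sketch of a two-step synthetic-media moderation decision.
type Provenance = "human" | "ai" | "mixed" | "unknown";
type Decision = "allow" | "label" | "remove";

interface Detector {
  provenance(content: Uint8Array): Provenance; // step 1: AI-generated or not?
  harmScore(content: Uint8Array): number;      // step 2: how harmful is it? (0..1)
}

function moderate(content: Uint8Array, d: Detector): Decision {
  const provenance = d.provenance(content);
  const harm = d.harmScore(content);
  if (harm > 0.8) return "remove"; // harmful regardless of provenance
  if (provenance === "ai" || provenance === "mixed") return "label";
  return "allow";
}

// Example with a placeholder detector standing in for real models.
const dummy: Detector = { provenance: () => "ai", harmScore: () => 0.3 };
console.log(moderate(new Uint8Array(), dummy)); // "label"
```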

Leadership in 2025 requires more than expertise; it demands adaptability,
compassion, and tech fluency. “Leadership today isn’t about having all the
answers; it’s about creating an environment where teams can sense, interpret,
and act with speed, autonomy, and purpose,” said Govind. As the learning journey
of Conduent pivots from stabilization to growth, he shared that the leaders need
to do two key things in the current scenario: be human-centric and be digitally
fluent. Similarly, Srilatha highlighted a fundamental shift happening among the
leaders: “Leaders today must lead with both compassion and courage while taking
tough decisions with kindness.” She also underlined the rising importance of the
three Rs in modern leadership: reskilling, resilience, and rethinking. ...
Govind pointed to something deceptively simple: acting on feedback. “We didn’t
just collect feedback, we analyzed sentiment, made changes, and closed the loop.
That made stakeholders feel heard.” This approach led Conduent to experiment
with program duration, going from 12 to 8 to 6 months. “Learning is a
continuum, not a one-off event,” Govind added. ... Leadership development is no
longer optional or one-size-fits-all. It’s a business imperative—designed around
human needs and powered by digital fluency.

As AI applications extend to third parties, CISOs will need tailored audits of
third-party data, AI security controls, supply chain security, and so on.
Security leaders must also pay attention to emerging and often changing AI
regulations. The EU AI Act is the most comprehensive to date, emphasizing
safety, transparency, non-discrimination, and environmental friendliness.
Others, such as the Colorado Artificial Intelligence Act (CAIA), may change
rapidly as consumer reaction, enterprise experience, and legal case law evolve.
CISOs should anticipate other state, federal, regional, and industry
regulations. ... Established secure software development lifecycles should be
amended to cover areas such as AI threat modeling, data handling, and API
security. ... End user training should include acceptable use, data handling,
misinformation, and deepfake awareness. Human risk management (HRM) solutions
from vendors such as Mimecast may be necessary to keep up with AI threats and
customize training to different individuals and roles. ... Simultaneously,
security leaders should schedule roadmap meetings with leading security
technology partners. Come to these meetings prepared to discuss specific needs
rather than sit through pie-in-the-sky PowerPoint presentations. CISOs should
also ask vendors directly about how AI will be used for existing technology
tuning and optimization.

"Many organizations know what data they are looking for and how they want to
process it but lack the in-house expertise to manage the platform itself," said
Matthew Weier O'Phinney, Principal Product Manager at Perforce OpenLogic. "This
leads to some moving to commercial Big Data solutions, but those that can't
afford that option may be forced to rely on less-experienced engineers. In which
case, issues with data privacy, inability to scale, and cost overruns could
materialize." ... EOL operating system, CentOS Linux, showed surprisingly high
usage, with 40% of large enterprises still using it in production. While CentOS
usage declined in Europe and North America in the past year, it is still the
third most used Linux distribution overall (behind Ubuntu and Debian), and the
top distribution in Asia. Among teams deploying EOL CentOS, 83% cited security
and compliance as their biggest concern. ... "Open source
is the engine driving innovation in Big Data, AI, and beyond—but adoption alone
isn't enough," said Gael Blondelle, Chief Membership Officer of the Eclipse
Foundation. "To unlock its full potential, organizations need to invest in their
people, establish the right processes, and actively contribute to the long-term
sustainability and growth of the technologies they depend on."

The CaaS market is a booming economy in the shadows, driving annual revenues
into billions. While precise figures are elusive due to its illicit nature,
reports suggest it's a substantial and growing market. CaaS contributes
significantly, and the broader cybersecurity services market is projected to
reach hundreds of billions of dollars in the coming years. If measured as a
country, cybercrime would already be the world's third-largest economy, with
projected annual damages reaching USD 10.5 trillion by 2025, according to
Cybersecurity Ventures. This growth is fueled by the same principles that drive
legitimate businesses: specialisation, efficiency, and accessibility. CaaS
platforms function much like dark online marketplaces. They offer pre-made
hacking kits, phishing templates, and even access to already compromised
computer networks. These services significantly lower the entry barrier for
aspiring criminals. ... Enterprises must recognise that attackers often hit
multiple systems simultaneously—computers, user identities, and cloud
environments. This creates significant "noise" if security tools operate in
isolation. Relying on many disparate security products makes it difficult to
gain a holistic view and understand that seemingly separate incidents are often
part of a single, coordinated attack.
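
One way to read that advice in code: correlate alerts from different tools by a shared entity, such as the user identity, so that signals which look separate in isolation surface as one incident. The sketch below assumes invented field names and alert sources.

```typescript
// Illustrative sketch of grouping alerts from endpoint, identity, and cloud tools
// by the user they touch, so a coordinated attack isn't read as unrelated noise.
interface Alert {
  source: "endpoint" | "identity" | "cloud";
  user: string;
  time: number;
  detail: string;
}

function correlateByUser(alerts: Alert[]): Map<string, Alert[]> {
  const incidents = new Map<string, Alert[]>();
  for (const a of alerts) {
    const group = incidents.get(a.user) ?? [];
    group.push(a);
    incidents.set(a.user, group);
  }
  // A group spanning several sources hints that seemingly separate alerts
  // belong to one coordinated attack against the same identity.
  return incidents;
}

// Example: three alerts from different tools, one likely incident.
const groups = correlateByUser([
  { source: "endpoint", user: "alice", time: 1, detail: "malware detected" },
  { source: "identity", user: "alice", time: 2, detail: "impossible-travel login" },
  { source: "cloud", user: "alice", time: 3, detail: "unusual API activity" },
]);
console.log(groups.get("alice")?.length); // 3 related alerts grouped together
```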

For developers, figuring out where things went wrong is difficult. In a survey
looking at the biggest challenges to observability, 58% of developers said that
identifying blind spots is a top concern. Stack traces may help, but they rarely
provide enough context to diagnose issues quickly; developers chase down
screenshots, reproduce problems, and piece together clues manually using the
metric and log data from APM tools; a bug that could take 30 minutes to fix ends
up consuming days or weeks. Meanwhile, telemetry data accumulates in massive
volumes—expensive to store and hard to interpret. Without tools to turn data
into insight, you’re left with three problems: high bills, burnout, and time
wasted fixing bugs that have little impact on core business functions or
revenue, even as increasing developer efficiency is a top strategic goal at many
organizations. ... More than anything, we need a cultural
change. Observability must be built into products from the start. That means
thinking early about how we’ll track adoption, usage, and outcomes—not just
deliver features. Too often, teams ship functionality only to find no one is
using it. Observability should show whether users ever saw the feature, where
they dropped off, or what got in the way. That kind of visibility doesn’t come
from backend logs alone.
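
As a rough sketch of what "observability built into the product" might look like, feature-level events can be emitted alongside backend telemetry so adoption and drop-off are visible. The event shape, names, and transport below are hypothetical.

```typescript
// Sketch of product-level instrumentation: record when a feature is shown,
// used, or abandoned, rather than relying on backend logs alone.
type FeatureEvent = {
  feature: string;
  stage: "seen" | "used" | "abandoned";
  userId: string;
  timestamp: number;
};

function track(event: FeatureEvent): void {
  // In a real product this would go to a telemetry pipeline or analytics SDK;
  // here we just log the event.
  console.log(JSON.stringify(event));
}

// Example: instrumenting a hypothetical export dialog.
track({ feature: "export-dialog", stage: "seen", userId: "u123", timestamp: Date.now() });
track({ feature: "export-dialog", stage: "abandoned", userId: "u123", timestamp: Date.now() });
```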