Quote for the day:
“Challenges are what make life
interesting and overcoming them is what makes life meaningful.” --
Joshua J. Marine

The key to success is engaging end-users and stakeholders in developing the
goals and requirements around features and user stories. ... GenAI should
help agile teams incorporate more design thinking practices and increase
feedback cycles. “GenAI tools are fundamentally shifting the role of product
owners and business analysts by enabling them to prototype and iterate on
requirements directly within their IDEs rapidly,” says Simon Margolis,
Associate CTO at SADA. “This allows for more dynamic collaboration with
stakeholders, as they can visualize and refine user stories and acceptance
criteria in real time. Instead of being bogged down in documentation, they can
focus on strategic alignment and faster delivery, with AI handling the
technical translation.” ... “GenAI excels at aligning user stories and
acceptance criteria with predefined specs and design guidelines, but the
original spark of creativity still comes from humans,” says Ramprakash
Ramamoorthy, director of AI research at ManageEngine. “Analysts and product
owners should use genAI as a foundational tool rather than relying on it
entirely, freeing themselves to explore new ideas and broaden their thinking.
The real value lies in experts leveraging AI’s consistency to ground their
work, freeing them to innovate and refine the subtleties that machines cannot
grasp.”

As security measures around production environments have strengthened,
attackers are shifting left, straight into the software development lifecycle
(SDLC). These less protected yet complex environments have become
prime targets, where gaps in security can expose sensitive data and derail
operations if exploited. That’s why recognizing the warning signs of nefarious
behavior is critical. But identification alone isn’t enough—security and
development teams must work together to address these risks before attackers
exploit them. ... Abnormal spikes in repository cloning activity may indicate
data exfiltration from Software Configuration Management (SCM)
tools. When an identity clones repositories at unexpected volumes or times
outside normal usage patterns, it could signal an attempt to collect source
code or sensitive project data for unauthorized use. ... While cloning is a
normal part of development, a repository that is copied but shows no further
activity may indicate an attempt to exfiltrate data rather than legitimate
development work. Pull Request approvals from identities lacking repository
activity history may indicate compromised accounts or an attempt to bypass
code quality safeguards. When changes are approved by users without prior
engagement in the repository, it could be a sign of malicious attempts to
introduce harmful code or represent reviewers who may overlook critical
security vulnerabilities.
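The clone-spike heuristic described above can be sketched as a baseline check
of each identity's own historical activity. This is a minimal illustration, not
any particular SCM tool's audit schema: the event tuple layout, the z-score
threshold, and the minimum-history cutoff are all assumptions chosen for the
example.

```python
from collections import defaultdict
from statistics import mean, stdev

def flag_clone_anomalies(events, min_history=5, z_threshold=3.0):
    """Flag identities whose latest daily clone count spikes far above
    their own baseline.

    `events` is a list of (identity, day, clone_count) tuples -- an
    illustrative stand-in for aggregated SCM audit-log data.
    """
    history = defaultdict(list)
    for identity, day, count in sorted(events, key=lambda e: e[1]):
        history[identity].append(count)

    flagged = []
    for identity, counts in history.items():
        if len(counts) < min_history + 1:
            continue  # not enough baseline days to judge the latest one
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        # A spike several standard deviations above the identity's own
        # mean is worth a second look by the security team.
        if sigma == 0:
            if latest > mu:
                flagged.append(identity)
        elif (latest - mu) / sigma > z_threshold:
            flagged.append(identity)
    return flagged
```

In practice the same idea would run against real audit logs, with per-repository
and time-of-day dimensions added, and flagged identities routed to review rather
than blocked outright, since cloning is also normal development activity.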

The rapid evolution of AI and data-centric technologies is forcing
organizations to rethink how they structure and govern their information
assets. Enterprises are increasingly moving from domain-driven data
architectures — where data is owned and managed by business domains — to
AI/ML-centric data models that require large-scale, cross-domain integration.
Questions arise about whether this transition is compatible with traditional
EA practices. The answer: While there are tensions, the shift is not
fundamentally at odds with EA but rather demands a significant transformation
in how EA operates. ... Governance in an agentic architecture flips the script
for EA by shifting focus to defining an agent's domain authority to
participate in an ecosystem. That encompasses the systems it can interact
with, the commands it can execute, the other agents it can communicate with,
the cognitive models it relies on, and the goals set for it.
Ensuring agents are good corporate citizens means enterprise architects must
engage with business units to set the parameters for what an agent can and
cannot do on behalf of the business. Further, the relationship and those
parameters must be “tokenized” to authenticate the capacity to execute those
actions.
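One way to make those domain-authority parameters concrete is to treat them as
an explicit grant that is signed and re-verified before every action. The
sketch below is a hypothetical illustration, not a reference to any specific
agent framework: the field names, the HMAC-based "token," and the
`may_execute` check are all assumptions for the example.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentAuthority:
    """Illustrative grant: what an agent may do on behalf of the business."""
    agent_id: str
    systems: frozenset      # systems the agent may interact with
    commands: frozenset     # commands it may execute
    peer_agents: frozenset  # other agents it may communicate with

    def payload(self):
        # Canonical byte representation so the signature is stable.
        return json.dumps({
            "agent_id": self.agent_id,
            "systems": sorted(self.systems),
            "commands": sorted(self.commands),
            "peers": sorted(self.peer_agents),
        }, sort_keys=True).encode()

def tokenize(authority, secret):
    """'Tokenize' the grant: an HMAC the runtime can verify later."""
    return hmac.new(secret, authority.payload(), hashlib.sha256).hexdigest()

def may_execute(authority, token, secret, system, command):
    """Verify the token's integrity, then check the action against the grant."""
    if not hmac.compare_digest(token, tokenize(authority, secret)):
        return False  # grant was altered or signed with a different secret
    return system in authority.systems and command in authority.commands
```

The point of the design is that widening an agent's scope requires re-issuing
the token, so business units and enterprise architects stay in the loop
whenever the parameters change.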

“We’re really trying to help regulate the use of your geolocation data,” says
the bill’s author, Democratic Assemblymember Chris Ward, who represents
California’s 78th district, which covers parts of San Diego and surrounding
areas. “You should not be able to sell, rent, trade, or lease anybody’s
location information to third parties, because nobody signed up for that.”
Among types of personal information, location data is especially sensitive. It
reveals where people live, work, worship, protest, and seek medical care. It
can expose routines, relationships, and vulnerabilities. As stories continue
to surface about apps selling location data to brokers, government workers,
and even bounty hunters, the conversation has expanded. What was once a debate
about privacy has increasingly become a concern over how the exposure of this
data infringes upon fundamental civil liberties. “Geolocation is very
revealing,” says Justin Brookman, the director of technology policy at
Consumer Reports, which supported the legislation. “It tells a lot about you,
and it also can be a public safety issue if it gets into the wrong person’s
hands.” ... Equally troubling, Ward argues, is who benefits. The companies
collecting and selling this data are driven by profit, not transparency. As
scholar Shoshana Zuboff has argued, surveillance capitalism doesn’t thrive
because users want personalized ads.

From day one, I emphasise that digital transformation isn’t just about
adopting new tools—it’s about aligning those tools with business objectives,
improving internal processes, and responding to changing customer
expectations. To bring this to life, I use a blended approach that combines
theory with real-world practice. Students explore frameworks and models that
explain how businesses adapt to technological change, and then apply these to
real case studies from global companies, SMEs, and my own entrepreneurial
experiences. These examples give them insight into how digital transformation
plays out in areas like operations, marketing, and customer relationship
management (CRM). Active learning is central to my teaching. I use group work,
live problem-solving, digital tool demonstrations, and hands-on simulations to
help students experience digital transformation in action. I also introduce
them to established business platforms and emerging technologies, encouraging
them to assess their value and strategic impact. Ultimately, I aim to create
an environment where students don’t just learn about digital
transformation—they think like digital leaders, able to question, analyse, and
apply what they’ve learned in real organisational contexts.

The perception of security as a barrier is a challenge faced by many
organizations, especially in environments where innovation is prioritized. The
solution lies in shifting the narrative: security teams are caregivers for the
value created in the organization. Most scientists and executives already understand
the consequences of a cyberattack—lost research, stolen intellectual property,
and disrupted operations. We involve them in the process. When lab leaders feel
that their input has shaped security protocols, they’re more likely to support
and champion those initiatives. Co-creating solutions ensures that security
controls are not only effective but also practical for the scientific workflow.
In short, building trust, demonstrating empathy for their challenges, and
proving the value of security through action are what ultimately win buy-in. ...
Shadow IT is a reality in any organization, but it’s particularly prevalent in
environments like ours, where creativity and experimentation often outpace
formal approval processes. While it’s important to communicate the risks of
shadow IT clearly, we also recognize that outright bans are rarely effective.
Instead, we focus on enabling secure alternatives. In the broader organization,
we use tools to detect and prevent shadow IT, combined with strict communication
around approved solutions.

With LastPass's browser extension for password management already
well-positioned to observe -- and even restrict -- employee web usage, the
security company has announced that it's diversifying into SaaS monitoring for
small to midsize enterprises (SMEs). SaaS monitoring is part of a larger
technology category known as SaaS Identity and Access Management, or SaaS IAM.
As more employees are drawn to AI to improve productivity, the company is
pitching an affordable solution to help SMEs contain the risks and costs
associated with shadow SaaS, an umbrella term for rogue SaaS procurement that
covers shadow IT and its latest variant, shadow AI. ... LastPass sees
the new capabilities aligning with an organization's business objectives in a
variety of ways. "One could be compliance," MacLennan told ZDNET. "Another could
be the organization's internal sense of risk and risk management. Another could
be cost because we're surfacing apps by category, in which case you'll see the
whole universe of duplicative apps in use." MacLennan also noted that the new
offering makes it easy to reduce costs due to the over-provisioning of SaaS
licenses. For example, an organization might be paying for 100 seats of a SaaS
solution while the monitoring tool reveals that only 30 of those licenses are
in active use.
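The over-provisioning example reduces to simple arithmetic once a monitoring
tool surfaces per-app seat counts and active usage. The helper below is a
sketch under that assumption; the app name and dollar figures are hypothetical,
not LastPass data.

```python
def license_waste(apps):
    """Estimate annual spend on idle seats per app.

    `apps` maps app name -> (seats_paid, seats_active, cost_per_seat_per_year),
    hypothetical figures standing in for what a SaaS monitoring tool reports.
    """
    report = {}
    for name, (paid, active, cost) in apps.items():
        idle = max(paid - active, 0)  # never negative if usage exceeds seats
        report[name] = idle * cost
    return report

# 100 seats paid, 30 active, $600/seat/year -> 70 idle seats
waste = license_waste({"crm-suite": (100, 30, 600.0)})
# → {'crm-suite': 42000.0}
```

Summing the report across the "whole universe of duplicative apps" MacLennan
mentions is what turns category-level visibility into a cost argument.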

ISO 42001 is particularly relevant for organisations operating within layered
supply chains, especially those building on cloud platforms. For these
environments, where infrastructure, platform and software providers each play a
role in delivering AI-powered services to end users, organisations must maintain
a clear chain of responsibility and vendor due diligence. By defining roles
across the shared responsibility model, ISO 42001 helps ensure that governance,
compliance and risk management are consistent and transparent from the ground
up. Doing so not only builds internal confidence but also enables partners and
providers to demonstrate trustworthiness to customers across the value chain.
As a result, trust management becomes a vital part of the picture: an ongoing
process of demonstrating transparency and control around the way organisations
handle data, deploy technology, and meet regulatory expectations.
Rather than treating compliance as a static goal, trust management introduces a
more dynamic, ongoing approach to demonstrating how AI is governed across an
organisation. By operationalising transparency, it becomes much easier to
communicate security practices and explain decision-making processes to provide
evidence of responsible development and deployment.

When disaster strikes, employees may be without electricity, internet, or cell
service for days or weeks. They may have to evacuate their homes. They may be
struggling with the loss of family members, friends, or neighbors. Just as
organizations have disaster mitigation and recovery plans for main offices and
data centers, they should be prepared to support remote employees in disaster
situations they likely have never encountered before. Employers must counsel
workers on what to do, provide additional resources, and above all, ensure that
their mental health is attended to. ... Beyond cybersecurity risks, being forced
to leave their home environment presents employees with another significant
challenge: the potential loss of personal artifacts, from tax documents and
family heirlooms to cherished photos. Lahiri refers to the process of
safeguarding such items as “personal disaster recovery planning” and notes that
this aspect of worker support is often overlooked. While companies have
experience migrating servers from local offices to distributed teams, few have
considered how to support employees on a personal level, he says. Lahiri urges
IT teams to take a more empathetic approach and broaden their scope to include
disaster recovery planning for employees’ home offices.
Prompting might seem trivial at first. After all, you send free-form text to a
model, so what could go wrong? However, how you phrase a prompt and what context
you provide can drastically change your model's behavior, and there's no
compiler to catch errors or a standard library of techniques. ... Few-Shot
Prompting is one of the most straightforward yet powerful prompting approaches.
Without examples, your model might generate inconsistent outputs, struggle with
task ambiguity, or fail to meet your specific requirements. You can solve this
problem by including a handful of examples (input-output pairs) in the prompt
and then supplying the actual input. You are essentially providing
training data on the fly. This allows the model to generalize without
re-training or fine-tuning. ... If you are a software developer trying to solve
a complex algorithmic problem or a software architect trying to analyze complex
system bottlenecks and vulnerabilities, you will probably brainstorm various
ideas with your colleagues to understand their pros and cons, break down the
problem into smaller tasks, and then solve it iteratively, rather than jumping
to the solution right away. In Chain-of-Thought (CoT) prompting, you encourage
the model to follow a very similar process and think aloud by breaking the
problem down into a step-by-step process.
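Both patterns come down to how the prompt string is assembled before it is sent
to a model. The helpers below are a minimal, model-agnostic sketch: the
`Input:`/`Output:` labels, the example pairs, and the "think step by step"
wording are common conventions chosen for illustration, not a specific
vendor's API.

```python
def few_shot_prompt(examples, query):
    """Build a few-shot prompt: in-context input/output pairs, then the
    real input, ending where the model should continue."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

def chain_of_thought_prompt(question):
    """Build a CoT prompt: ask the model to reason step by step, breaking
    the problem into smaller tasks before answering."""
    return (f"{question}\n"
            "Let's think step by step, breaking the problem into smaller "
            "tasks before giving the final answer.")

# Hypothetical sentiment task: two demonstrations, then the real input.
prompt = few_shot_prompt(
    [("great movie!", "positive"), ("waste of time", "negative")],
    "surprisingly good",
)
```

With few-shot, the model infers the task format from the pairs; with CoT, the
instruction nudges it to emit intermediate reasoning, which tends to help on
multi-step problems at the cost of longer outputs.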