Quote for the day:
"If you genuinely want something, don't wait for it -- teach yourself to be impatient." -- Gurbaksh Chahal
How to Move from Manual to Automated to Autonomous Testing

As great as test automation is, it would be a mistake to de-emphasize or
eliminate manual testing. Automated testing's strength is its ability to catch
issues while scanning code; its significant weakness is that it is less
reliable than manual testing at noticing unexpected issues that surface during
usability tests. While developing and implementing
automated tests, organizations should integrate manual testing into their
overall quality assurance program. Even though manual testing may not initially
benefit the bottom line, it definitely adds a level of protection against issues
that could wreak havoc down the road, with potential damage in the areas of
cost, quality, and reputation. ... The end goal is to have an autonomous testing
program that has a clear focus on helping the organization achieve its desired
business outcomes. There is a consistent theme in successfully developing and
implementing automated testing programs: planning and patience. With the right
strategy and a deliberate rollout, test automation opens the door to smoother
operations and the ability to remain competitive and profitable in the
ever-changing world of software development. A successful implementation of
automation practices requires investing in training and establishing best
practices.
The Hidden Dangers of Artifactory Tokens: What Every Organization Should Know
If tokens with read access are dangerous, those with write permissions are
cybersecurity nightmares made flesh. They enable the most feared attack vector
in modern software: supply chain poisoning. The playbook is elegant in its
simplicity and devastating in its impact. Attackers identify frequently
downloaded packages within your Artifactory instance, insert malicious code into
these dependencies, then repackage and upload them as new versions. From there,
they simply wait as unsuspecting users throughout your organization
automatically upgrade to the compromised versions during routine updates. The
cascading damage expands exponentially depending on which components get
poisoned. Compromising build environments leads to persistent backdoors in all
future software releases. Targeting developer tools gives attackers access to
engineer workstations and credentials. ... The first line of defense must be
preventing leaks before they happen. Implement secret detection tools that
catch credentials before they're published to repositories. Establish
monitoring that can identify exposed tokens on public forums, even those
leaked from personal developer accounts. And follow JFrog's evolving security
guidance — such as moving away from deprecated API keys — to ensure you're not
using authentication methods with known weaknesses.
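The prevention steps above can be sketched as a toy pre-publish scanner. This is a minimal illustration, not a substitute for dedicated tools such as gitleaks or truffleHog; the token patterns below (the JFrog API-key header, a reference-token prefix) are assumptions, and real rule sets are far broader.

```python
import re

# Illustrative patterns only; real secret scanners ship much larger rule sets.
TOKEN_PATTERNS = {
    # Assumed prefix for JFrog reference tokens.
    "artifactory_reference_token": re.compile(r"\bcmVmdGtu[A-Za-z0-9_-]{20,}"),
    # JFrog's legacy API-key header, now deprecated in favor of access tokens.
    "artifactory_api_key_header": re.compile(r"X-JFrog-Art-Api\s*:\s*\S+"),
    # Generic bearer token in an Authorization header.
    "generic_bearer": re.compile(r"Authorization:\s*Bearer\s+\S{20,}"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, truncated_snippet) pairs for each match found."""
    findings = []
    for name, pattern in TOKEN_PATTERNS.items():
        for match in pattern.finditer(text):
            # Truncate the match so the report itself doesn't re-leak the token.
            findings.append((name, match.group(0)[:12] + "..."))
    return findings

if __name__ == "__main__":
    sample = 'curl -H "X-JFrog-Art-Api: AKCp8abcdef1234567890" https://repo.example.com'
    for rule, snippet in scan_text(sample):
        print(rule, snippet)
```

Run as a pre-commit hook or CI step, such a check catches the easy leaks before a token ever reaches a public repository.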
Is Model Context Protocol the New API?

With APIs, we learned that API design matters. Great APIs, like those from
Stripe or Twilio, were designed for the developer. With MCP, design matters too.
But who are we authoring for? You’re not authoring for a human; you’re authoring
for a model that will pay close attention to every word you write. And it’s not
just design: the operationalization of MCP matters too, another point of
parallelism with the world of APIs. As we used to say at
Apigee, there are good APIs and bad APIs. If your backend descriptions are
domain-centric — as opposed to business or end-user centric — integration,
adoption and developers’ overall ability to use your APIs will be impaired. A
similar issue can arise with MCP. An AI might not recognize or use an MCP
server’s tools if its description isn’t clear, action-oriented or AI friendly. A
final thing to note, which in many ways is very new to the AI world, is the fact
that every action is “on the meter.” In the LLM world, everything turns into
tokens, and tokens are dollars, as NVIDIA CEO Jensen Huang reminded us in his
GTC keynote this year. So, AI-native apps — and by extension the MCP
servers that those apps connect to — need to pay attention to token optimization
techniques necessary for cost optimization. There’s also a question of resource
optimization outside of the token/GPU space.
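The contrast between a domain-centric description and an action-oriented, AI-friendly one can be made concrete with a hypothetical tool definition. The field names follow MCP's tool schema (name, description, inputSchema), but the tool itself and its wording are invented for illustration.

```python
# Domain-centric description: a model has little to go on when deciding
# whether, or how, to call this tool.
bad_tool = {
    "name": "oms_query_v2",
    "description": "Executes a query against the OMS order ledger subsystem.",
    "inputSchema": {
        "type": "object",
        "properties": {"q": {"type": "string"}},
    },
}

# Action-oriented description: states what the tool does, when to use it,
# and what each input means, in plain language the model can act on.
good_tool = {
    "name": "lookup_order_status",
    "description": (
        "Look up the current shipping status of a customer order. "
        "Use this when the user asks where their order is. "
        "Input: the order ID printed on the confirmation email."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Order ID, e.g. 'ORD-12345'",
            }
        },
        "required": ["order_id"],
    },
}
```

The second version also helps with the token budget: a description that is clear on first read avoids the extra round trips a model spends probing an ambiguous tool.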
CISOs must speak business to earn executive trust
The key to building broader influence is translating security into business
impact language. I’ve found success by guiding conversations around what
executives and customers truly care about: business outcomes, not technical
implementations. When I speak with the CEO or board members, I discuss how our
security program protects revenue, ensures business continuity and enables
growth. With many past breaches, organizations detected the threat but failed to
take timely action, resulting in significant business impact. By emphasizing how
our approach prevents these outcomes, I’m speaking their language. ...
Successfully shifting a security organization from being perceived as the
“department of no” to a strategic enabler requires a fundamental change in
mindset, engagement model and communication style. It begins with aligning
security goals to the broader business strategy, understanding what drives
growth, customer trust and operational efficiency. Security leaders must engage
cross-functionally early and often, embedding their teams within product
development, IT and go-to-market functions to co-create secure solutions rather
than imposing controls after the fact. This proactive, partnership-driven
approach reduces friction and builds credibility.
Enterprise IAM could provide needed critical mass for reusable digital identity

Acquisitions, different business goals, and even rogue teams can prevent a
single, unified platform from serving the whole organization. And then there are
partnerships, employees contracted to customers, customer onboarding and a host
of other situations that force identity information to move from an internal
system to another one. “The result is we end up building difficult, complicated
integrations that are hard to maintain,” Esplin says. Further, people want
services that providers can only deliver by receiving trusted information, but
people are hesitant to share their information. And then there are the attendant
regulatory concerns, particularly where biometrics are involved. Intermediaries
clearly have a big role to play. Some of those intermediaries may be AI agents,
which can ease data sharing but do not address the central concern of how to
limit information sharing while delivering trust. Esplin argues for
verifiable credentials as the answer, with the signature of the issuer providing
the trust and the consent-based sharing model of VCs satisfying users’ desire to
limit data sharing. Because VCs are standardized, the need for complicated
integrations is removed. Biometric templates are stored by the user, enabling
strong binding without the data privacy concerns that come with legacy
architectures.
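The flow Esplin describes can be sketched with toy primitives. Real verifiable credentials follow the W3C VC data model with asymmetric signatures (and schemes such as BBS+ when a signature must verify over a disclosed subset); the HMAC shared key below stands in purely to illustrate issuance, verification, and consent-based presentation.

```python
import hashlib
import hmac
import json

# Assumption: a stand-in for the issuer's signing key, for illustration only.
ISSUER_KEY = b"issuer-demo-key"

def issue_credential(claims: dict) -> dict:
    """Issuer signs the claims; the holder stores the result."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "issuer_signature": signature}

def verify_credential(credential: dict) -> bool:
    """Verifier trusts the issuer's signature, not the holder's say-so."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["issuer_signature"])

def present(credential: dict, disclosed: set[str]) -> dict:
    """Consent-based sharing: the holder chooses which claims to reveal.
    (Real selective disclosure uses signatures, e.g. BBS+, that still
    verify over the disclosed subset; this sketch omits that.)"""
    return {k: v for k, v in credential["claims"].items() if k in disclosed}
```

Because the format is standardized, the verifier needs no bespoke integration with the issuer, which is the point about removing complicated integrations.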
Beyond speed: Measuring engineering success by impact, not velocity
From a planning and accountability perspective, velocity gives teams a clean way
to measure output vs. effort. It can help them plan for sprints and prioritize
long-term productivity targets. It can even help with accountability, allowing
teams to rightsize their work and communicate it cross-departmentally. The
issues begin when it is used as the sole metric of success for teams, as it
fails to reveal the nuances necessary for high-level strategic thinking and
positioning by leadership. It sets up a standard that over-emphasizes pure
workload rather than productive effort towards organizational objectives. ...
When leadership works with their engineering teams to find solutions to business
challenges, they create a highly visible value stream between each individual
developer and the customer at the end of the line. For engineering-forward
organizations, developer experience and satisfaction is a top priority, so
factors like transparency and recognition of work go a long way towards
developer well-being. Perhaps most vital is for business and tech leaders to
create roadmaps of success for engineers that clearly align with the goals of
the overall business. LinearB cofounder and COO Lines acknowledges that these
business goals can differ wildly between businesses: “For some of the leaders
that I work with, real business impact might be as simple as, we got to get to
production faster…”
Sakana introduces new AI architecture, ‘Continuous Thought Machines’ to make models reason with less guidance — like human brains

Sakana AI’s Continuous Thought Machine is not designed to chase
leaderboard-topping benchmark scores, but its early results indicate that its
biologically inspired design does not come at the cost of practical capability.
On the widely used ImageNet-1K benchmark, the CTM achieved 72.47% top-1 and
89.89% top-5 accuracy. While this falls short of state-of-the-art transformer
models like ViT or ConvNeXt, it remains competitive—especially considering that
the CTM architecture is fundamentally different and was not optimized solely for
performance. What stands out more are CTM’s behaviors in sequential and adaptive
tasks. In maze-solving scenarios, the model produces step-by-step directional
outputs from raw images—without using positional embeddings, which are typically
essential in transformer models. Visual attention traces reveal that CTMs often
attend to image regions in a human-like sequence, such as identifying facial
features from eyes to nose to mouth. The model also exhibits strong calibration:
its confidence estimates closely align with actual prediction accuracy. Unlike
most models that require temperature scaling or post-hoc adjustments, CTMs
improve calibration naturally by averaging predictions over time as their
internal reasoning unfolds.
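The averaging idea can be illustrated in a few lines. The per-tick outputs below are synthetic stand-ins, not CTM predictions; the point is only that averaging over internal steps moderates an overconfident final step while preserving the predicted class.

```python
def average_over_ticks(tick_probs):
    """tick_probs: list of per-tick probability rows, each summing to 1.
    Returns the per-class mean, which also sums to 1."""
    num_ticks = len(tick_probs)
    return [sum(col) / num_ticks for col in zip(*tick_probs)]

# Synthetic example: early ticks are noisy; later ticks converge on class 2
# with high confidence.
ticks = [
    [0.70, 0.20, 0.10],
    [0.30, 0.20, 0.50],
    [0.10, 0.10, 0.80],
    [0.05, 0.05, 0.90],
]
avg = average_over_ticks(ticks)
# The averaged prediction still picks class 2, but at a moderated
# confidence of 0.575 rather than the final tick's 0.90.
```

In a well-calibrated model that moderated confidence tracks actual accuracy, which is what removes the need for post-hoc temperature scaling.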
How to build (real) cloud-native applications

Cloud-native applications are designed and built specifically to operate in
cloud environments. It’s not about just “lifting and shifting” an existing
application that runs on-premises and letting it run in the cloud. Unlike
traditional monolithic applications that are often tightly coupled, cloud-native
applications are modular in a way that monolithic applications are not. A
cloud-native application is not an application stack, but a decoupled
application architecture. Perhaps the most atomic level of a cloud-native
application is the container. A container could be a Docker container, though
really any type of container that matches the Open Container Initiative (OCI)
specifications works just as well. Often you’ll see the term microservices used
to define cloud-native applications. Microservices are small, independent
services that communicate over APIs—and they are typically deployed in
containers. A microservices architecture allows for independent scaling in an
elastic way that supports the way the cloud is supposed to work. While a
container can run on all different types of host environments, the most common
way that containers and microservices are deployed is inside of an orchestration
platform. The most commonly deployed container orchestration platform today is
the open source Kubernetes platform, which is supported on every major public
cloud.
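The pieces described above (an OCI container image, a microservice, and Kubernetes orchestration) fit together in a manifest like the following sketch. All names and the image reference are placeholders.

```yaml
# Minimal sketch of one containerized microservice deployed on Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3                 # each microservice scales independently
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: registry.example.com/orders-service:1.0.0  # any OCI image
          ports:
            - containerPort: 8080
```

Scaling this service up or down is then a one-field change (or an autoscaler decision) that touches nothing else in the application, which is the elasticity the excerpt describes.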
Responsible AI as a Business Necessity: Three Forces Driving Market Adoption

AI systems introduce operational, reputational, and regulatory risks that must
be actively managed and mitigated. Organizations implementing automated risk
management tools to monitor and mitigate these risks operate more efficiently
and with greater resilience. The April 2024 RAND report, “The Root Causes of
Failure for Artificial Intelligence Projects and How They Can Succeed,”
highlights that underinvestment in infrastructure and immature risk management
are key contributors to AI project failures. ... Market adoption is the primary
driver for AI companies, while organizations implementing AI solutions seek
internal adoption to optimize operations. In both scenarios, trust is the
critical factor. Companies that embed responsible AI principles into their
business strategies differentiate themselves as trustworthy providers, gaining
advantages in procurement processes where ethical considerations are
increasingly influencing purchasing decisions. ... Stakeholders extend beyond
regulatory bodies to include customers, employees, investors, and affected
communities. Engaging these diverse perspectives throughout the AI lifecycle,
from design and development to deployment and decommissioning, yields valuable
insights that improve product-market fit while mitigating potential risks.
Leading high-performance engineering teams: Lessons from mission-critical systems
Conducting blameless post-mortems was imperative to focus on improving the
systems without getting into blame avoidance or blame games. Building trust
required consistency from me: admitting mistakes, getting feedback, going
through exercises suggesting improvements, and responding in a constructive way.
At the heart of this was creating the conditions for the team to feel safe
taking interpersonal risks, so it was my role to steer conversation towards
systemic factors that contributed to failures (“What process or procedures
change could prevent this?”) and I was regularly looking for the opportunity to
discuss, or later analyze, patterns across incidents so I could work towards
higher order improvements. ... For teams just starting out, my advice is to take
a staged approach: pick one or two practices to begin with, a plan for how
those practices will evolve, and a few metrics that let the team realize
early value. Questions to ask yourself: How comfortable are team members sharing
reliability concerns? Does your team look for ways to prevent incidents through
your reviews or look for ways to blame others? How often does your team practice
responding to failure? ... In my experience, leading top engineering teams
requires a distinct set of skills: building a strong technical culture, focusing on
people, guiding teams through difficult times, and establishing durable
practices.