Quote for the day:
"Limitations live only in our minds. But
if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti

According to Digital Twin researcher Julian Gebhard, the industry is moving
toward integrated federated systems that allow seamless data exchange and
synchronization across tools and platforms. These systems rely on semantic
models and knowledge graphs to ensure interoperability and data integrity
throughout the product development process. By structuring data as semantic
triples (e.g., (Car) → (is colored) → (blue)), data becomes traversable,
transforming raw data into knowledge. It also becomes machine-readable, an
enabler for collaboration across departments that makes development more
efficient and consistent. The next step is to use knowledge graphs to model
product data at the value level, instead of only connecting metadata. They
enable dynamic feedback
loops across systems, so that changes in one area, such as simulation results
or geometry updates, can automatically influence related systems. This helps
maintain consistency and accelerates iteration during development. Moreover,
when functional data is represented at the value level, it becomes possible to
integrate disparate systems such as simulation and CAD tools into a unified,
holistic viewer. In this integrated model, any change in geometry in one
system automatically triggers updates in simulation parameters and physical
properties, ensuring that the digital twin evolves in tandem with the actual
product.
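The triple structure described above can be sketched with a minimal in-memory
store. This is a hypothetical illustration, not any tool Gebhard references;
the entity and predicate names are invented:

```python
# Minimal in-memory knowledge graph built from semantic triples.
# All entity and predicate names are illustrative, not from any
# real product-data model.

class TripleStore:
    def __init__(self):
        self.triples = set()  # (subject, predicate, object)

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the given pattern (None = wildcard)."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

kg = TripleStore()
kg.add("Car", "is colored", "blue")
kg.add("Car", "has part", "Wheel")
kg.add("Wheel", "has diameter_mm", 450)  # value-level data, not just metadata

# Traversal: follow "has part" edges from "Car" to reach value-level facts.
for _, _, part in kg.query(subject="Car", predicate="has part"):
    print(part, kg.query(subject=part))
```

Because every fact is a uniform (subject, predicate, object) edge, traversal
from metadata down to concrete values needs no per-tool schema, which is what
makes the cross-system feedback loops possible.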

AI agents are generally better than generative AI models at organizing,
surfacing, and evaluating data. In theory, this makes them less prone to
hallucinations. From the HBR article: “The greater cognitive reasoning of
agentic AI systems means that they are less likely to suffer from the
so-called hallucinations (or invented information) common to generative AI
systems. Agentic AI systems also have [a] significantly greater ability to
sift and differentiate information sources for quality and reliability,
increasing the degree of trust in their decisions.” ... Agentic AI is a
paradigm shift on the order of the emergence of LLMs or the shift to SaaS.
That is to say, it’s a real thing, but we don’t yet understand exactly how it
will change the way we live and work. The adoption
curve for agentic AI will have its challenges. There are questions wherever
you look: How do you put AI agents into production? How do you test and
validate code generated by autonomous agents? How do you deal with security
and compliance? What are the ethical implications of relying on AI agents?
As we all navigate the adoption curve, we’ll do our best to help our
community answer these questions. While building agents may quickly become
easier, solutions for these downstream impacts remain incomplete.

Forward-thinking companies are now applying cloud native principles to
contract management. Just as infrastructure became code with tools like
Terraform and Ansible, we’re seeing a similar transformation with business
agreements becoming “contracts-as-code.” This shift integrates critical
contract information directly into the CI/CD pipeline through APIs that
connect legal document management with operational workflows. Contract experts
at ContractNerds highlight how API connections enable automation and improve
workflow management beyond what traditional contract lifecycle management
systems can achieve alone. Interestingly, this cloud native contract
revolution hasn’t been led by legal departments. From our experience working
with over 1,500 companies, contract ownership is rapidly shifting to finance
and operations teams, with CFOs becoming the primary stakeholders in contract
management systems. ... As cloud native architectures mature, treating
business contracts as code becomes essential for maintaining velocity.
Successful organizations will break down the artificial boundary between
technical contracts (APIs) and business contracts (legal agreements), creating
unified systems where all obligations and dependencies are visible, trackable,
and automatable.
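As a purely hypothetical sketch of the contracts-as-code idea, a CI step might
validate machine-readable contract terms before a deployment. The schema
(`name`, `renewal_date`, `obligations`) and thresholds are invented for
illustration; a real system would pull these fields from a CLM API:

```python
# Hypothetical "contract-as-code" gate for a CI/CD pipeline.
# Field names and the 30-day warning window are invented for illustration.
from datetime import date

def check_contract(contract: dict, today: date, warn_days: int = 30) -> list:
    """Return human-readable issues; an empty list means the gate passes."""
    issues = []
    renewal = date.fromisoformat(contract["renewal_date"])
    if renewal < today:
        issues.append(f"{contract['name']}: contract lapsed on {renewal}")
    elif (renewal - today).days <= warn_days:
        issues.append(f"{contract['name']}: renewal due within {warn_days} days")
    for ob in contract.get("obligations", []):
        if not ob.get("owner"):
            issues.append(f"{contract['name']}: obligation '{ob['id']}' has no owner")
    return issues

contract = {
    "name": "hosting-msa",
    "renewal_date": "2025-07-01",
    "obligations": [{"id": "uptime-sla", "owner": "ops"}],
}
print(check_contract(contract, today=date(2025, 6, 15)))
# → ['hosting-msa: renewal due within 30 days']
```

A non-empty result would fail the pipeline stage, surfacing a legal obligation
the same way a failing unit test surfaces a code defect.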

Persistent memory could be hugely useful for work. Julian Wiffen, Chief of AI
and Data Science at Matillion, a data integration platform with AI built in,
sees strong use cases: “It could improve continuity for long-term projects,
reduce repeated prompts, and offer a more tailored assistant experience," he
says. But he’s also wary. “In practice, there are serious nuances that users,
and especially companies, need to consider.” His biggest concerns here are
privacy, control, and data security. ... OpenAI stresses that users can still
manage memory – delete individual memories that aren't relevant anymore, turn
it off entirely, or use the new “Temporary Chat” button. This now appears at
the top of the chat screen for conversations that are not informed by past
memories and won't be used to build new ones either. However, Wiffen says that
might not be enough. “What worries me is the lack of fine-grained control and
transparency,” he says. “It's often unclear what the model remembers, how long
it retains information, and whether it can be truly forgotten.” ... “Even
well-meaning memory features could accidentally retain sensitive personal data
or internal information from projects. And from a security standpoint,
persistent memory expands the attack surface.” This is likely why the new
update hasn't rolled out globally yet.

Are you confused about what President Donald J. Trump is doing with tariffs?
Join the crowd; we all are. But if you’re in charge of buying PCs for your
company (because Windows 10 officially reaches end-of-life status on Oct. 14),
all this confusion is quickly turning into worry. Before diving into what this
all means, let’s clarify one thing: you will be paying more for your
technology gear — period, end of statement. ... As Ingram Micro CEO Paul Bay
said in a CRN interview: “Tariffs will be passed through from the OEMs or
vendors to distribution, then from distribution out to our solution
providers and ultimately to the end users.” It’s already happening.
Taiwan-based computing giant Acer’s CEO, Jason Chen, recently spelled it out
cleanly: “10% probably will be the default price increase because of the
import tax. It’s very straightforward.” When Trump came into office, we all
knew there would be a ton of tariffs coming our way, especially on Chinese
products such as Lenovo computers, or products largely made in China, such as
those from Apple and Dell. ... But wait! It gets even murkier. Apparently that
tariff “relief” is temporary and partial. US Commerce Secretary Howard Lutnick
has already said that sector-specific tariffs targeting electronics are
forthcoming, “probably a month or two.” Just to keep things entertaining,
Trump himself has at times contradicted his own officials about the scope and
duration of the exclusions.

Li suggests companies look at how AI is integrated across the entire value
chain. "To realize business value, you need to improve the whole value
chain, not just certain steps." According to her, a comprehensive value
chain framework includes suppliers, employees, customers, regulators,
competitors, and the broader marketplace environment. For example, Li
explains that when AI is applied internally to support employees, the focus
is often on boosting productivity. However, using AI in customer-facing
areas directly affects the products or services being delivered, which
introduces higher risk. Similarly, automating processes for efficiency could
influence interactions with suppliers — raising the question of whether
those suppliers are prepared to adapt. ... Speaking of organizational
challenges, Li discusses how positioning AI in business and positioning AI
teams in organizations is critical. Depending on its level of readiness and
maturity, an organization could adopt a centralized, distributed, or
federated model, but the focus should be on people. Li then reminds us that
an organization’s governance processes are tied to its people, activities,
and operating model. She adds, “If you already have an
investment, evaluate and adjust your investment expectations based on the
exercise.”

The problem is that institutionalization with poor regulation, or none at all
(and we see algorithms as institutions), tends to move in an extractive
direction,
undermining development. If development requires technological innovation,
Acemoglu, Johnson, and Robinson taught us that inclusive institutions that are
transparent, equitable, and effective are needed. In a nutshell, long-term
prosperity requires democracy and its key values. We must, therefore,
democratize the institutions that play such a key role in shaping our contexts
of interaction by affecting individual behaviors with collective implications.
The only way to make algorithms more democratic is by regulating them, i.e.,
by creating rules that establish key values, procedures, and practices that
ought to be respected if we, as members of political communities, are to have
any control over our future. Democratic regulation of algorithms demands forms
of participation, revisability, protection of pluralism, struggle against
exclusion, complex output accountability, and public debate, to mention a few
elements. We must bring these institutions closer to democratic principles, as
we have tried to do with other institutions. When we consider inclusive
algorithmic institutions, the value of equality plays a crucial role—often
overlapping with the principle of participation.

The problem is the ease of access to AI tools, and a work environment that
increasingly advocates the use of AI to improve corporate efficiency. It is
little wonder that employees seek their own AI tools to improve their personal
efficiency and maximize the potential for promotion. It is frictionless, says
Michael Marriott, VP of marketing at Harmonic Security. “Using AI at work
feels like second nature for many knowledge workers now. Whether it’s
summarizing meeting notes, drafting customer emails, exploring code, or
creating content, employees are moving fast.” If the official tools aren’t
easy to access or if they feel too locked down, they’ll use whatever’s
available, often via an open tab in their browser. There is also almost
never any malicious intent (absent, perhaps, the mistaken employment of
rogue North Korean IT workers); merely a desire to do and be better. If this
involves using unsanctioned AI tools, employees will likely not disclose their
actions. The reasons may be complex but combine elements of a reluctance to
admit that their efficiency is AI assisted rather than natural, and knowledge
that use of personal shadow AI might be discouraged. The result is that
enterprises often have little knowledge of the extent of shadow AI use, nor
of the risks it may present.

The rise of AI-generated IDs poses a serious threat to digital transactions
for three key reasons. First, the physical and digital processes businesses
use to catch fraudulent IDs are not created equal. Second, less sophisticated
solutions may not be advanced enough to identify emerging fraud methods.
Third, with AI-generated ID images readily available on the dark web for as
little as $5, ownership and usage are proliferating. IDScan.net research from
2024 demonstrated that 78%
of consumers pointed to the misuse of AI as their core fear around identity
protection. Equally, 55% believe current technology isn’t enough to protect
our identities. Left unchallenged, AI fraud will damage consumer trust,
purchasing behavior, and business bottom lines. Despite the furor around
nefarious, super-advanced AI, generating AI IDs is fairly rudimentary. Dark
web suppliers rely on PDF417 barcode and ID image generators, using a degree
of automation
to match data inputs onto a contextual background. Easy-to-use tools such as
Thispersondoesnotexist make it simple for anyone to cobble together a quality
fake ID image and a synthetic identity. To deter potential AI-generated fake
ID buyers from purchasing, the identity verification industry needs to
demonstrate that our solutions are advanced enough to spot them, even as they
increase in quality.

A Raspberry Pi may seem forgiving regarding power needs, but underestimating
its requirements can lead to sudden shutdowns and corrupted data. Cloud services
that rely on a stable connection to read and write data need consistent energy
for safe operation. A subpar power supply might struggle under peak usage,
leading to instability or errors. Ensuring sufficient voltage and amperage is
key to avoiding complications. A strong power supply reduces random reboots
and performance bottlenecks. When the Pi experiences frequent resets, you risk
damaging your data and your operating system’s integrity. In addition, any
connected external drives might encounter file system corruption, harming
stored data. Taking steps to confirm your power setup meets recommended
standards goes a long way toward keeping your cloud server running reliably.
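On Raspberry Pi OS, one way to confirm the supply is holding up is to decode
the firmware's throttled flags. This sketch assumes the stock `vcgencmd` tool
is available on-device; the bit meanings follow Raspberry Pi's documentation:

```python
# Sketch: decode `vcgencmd get_throttled` output on a Raspberry Pi.
# Bit meanings follow the Raspberry Pi documentation; check_power()
# only works on-device, but decode_throttled() is plain parsing.
import subprocess

FLAGS = {
    0: "under-voltage detected now",
    2: "currently throttled",
    16: "under-voltage has occurred since boot",
    18: "throttling has occurred since boot",
}

def decode_throttled(raw: str) -> list:
    """Parse a string like 'throttled=0x50000' into active flag descriptions."""
    value = int(raw.strip().split("=")[1], 16)
    return [desc for bit, desc in FLAGS.items() if value & (1 << bit)]

def check_power() -> list:
    out = subprocess.check_output(["vcgencmd", "get_throttled"], text=True)
    return decode_throttled(out)

# Example: 0x50000 sets bits 16 and 18, i.e. past under-voltage and throttling.
print(decode_throttled("throttled=0x50000"))
```

Any non-empty result is a sign the supply or cable should be upgraded before
trusting the Pi with cloud-server duty.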
... A personal cloud server can create a false sense of security if you forget
to establish a backup routine. Files stored on the Pi can be lost due to
unexpected drive failures, accidents, or system corruption. Relying on a
single storage device for everything contradicts the data redundancy
principle. Setting up regular backups protects your data and helps you restore
from mishaps with minimal downtime. Building a reliable backup process means
deciding how often to copy your files and choosing safe locations to store
them.
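A minimal version of such a routine can be sketched in a few lines of Python.
The paths and retention count are illustrative; in practice the destination
should be a separate drive or remote host, not the Pi's own SD card:

```python
# Minimal backup sketch: copy a data directory to a timestamped snapshot
# and prune old snapshots. Paths and retention count are illustrative.
import shutil
from datetime import datetime
from pathlib import Path

def backup(src: str, dest_root: str, keep: int = 7) -> Path:
    """Copy src into dest_root/<timestamp>/, keeping the newest `keep` snapshots."""
    root = Path(dest_root)
    root.mkdir(parents=True, exist_ok=True)
    snapshot = root / datetime.now().strftime("%Y%m%d-%H%M%S")
    shutil.copytree(src, snapshot)
    # Prune: timestamped directory names sort chronologically.
    for old in sorted(p for p in root.iterdir() if p.is_dir())[:-keep]:
        shutil.rmtree(old)
    return snapshot

# Typical use on a Pi: run nightly from cron, e.g.
#   0 2 * * * python3 /home/pi/backup.py
```

Scheduling this from cron and pointing `dest_root` at a second disk covers
the two decisions the article names: how often to copy, and where to keep it.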