How to hold Three Amigos meetings in Agile development
Three Amigos meetings remove uncertainty from development projects, as they
provide a specified time for everyone to get on the same page about what to --
or not to -- build. "The meeting exposes any potential assumptions and forces
explicit answers," said Jeff Sing, lead software QA engineer at Optimizely, a
digital experience optimization platform. "Everyone walks away with
crystal-clear guidelines on what will be delivered and gets ahead of any
potential scope creep." For example, a new feature entails new business
requirements, engineering changes, UX flow and design. Each team faces its own
challenges and requirements. The business requirements focus on a broad
problem space and how to monetize the product. The engineering requirements
center on the technical solution and hurdles. The UX requirements define
product usability. The design requirements ensure the product looks finished.
All of these requirements might align -- or they might not. "This is why a
formalized meeting needs to occur to hash out how to achieve everyone's goals,
or which requirements will not be met and need to be dropped in order to build
the right product on the right time schedule," Sing said.
Key success factors behind intelligent automation
For an intelligent automation programme to really deliver, a strategy and
purpose are needed. This could be improving data quality, operational
efficiency, process quality and employee empowerment, or enhancing stakeholder
experiences by providing quicker, more accurate responses. Whatever the
rationale, an intelligent automation strategy must be aligned to the wider
needs of the business. Ideally, key stakeholders should be involved in
creating the vision; if they haven’t been, engage them now. If they see intelligent
automation as a strategic business project, they’ll support it and provide the
necessary financial and human resources too. Although intelligent automation
is usually managed by a business team, it will still be governed by the IT
team using existing practices, so they must also be involved at the beginning.
IT will support intelligent automation on many critical fronts, such as
compliance with IT security, auditability, the supporting infrastructure, its
configuration and scalability. So that intelligent automation can scale as
demand increases, plan where it sits within the business. A centralised
approach encompasses the entire organisation, so it may be beneficial to
embed this into a ‘centre of excellence’ (CoE) or start moving towards
creating this operating environment.
Why Most Organizations’ Investments in AI Fall Flat
A common mistake companies make is creating and deploying AI models using
Agile approaches fit for software development, like Scrum or DevOps. These
frameworks traditionally require breaking down a large project into small
components so that they can be tackled quickly and independently,
culminating in iterative yet stable releases, like constructing a building
floor by floor. However, AI is more like a science experiment than a
building. It is experiment-driven, where the whole model development life
cycle needs to be iterated—from data processing to model development and
eventually monitoring—and not just built from independent components.
These processes feed back into one another; therefore, a model is never
quite “done.” ... We know AI requires specialized skill sets—data
scientists remain highly sought-after hires in any enterprise. But the
data scientists who build the models and the product owners who manage
the functional requirements are not the only roles needed for AI to
work. The emerging role of machine-learning engineer is required to help
scale AI into reusable and stable processes that your business can depend
on. Professionals in model operations (model ops) are specialized
technicians who manage post-deployment model performance and are
ultimately responsible for ongoing stability and continuity of operations.
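To make the contrast with floor-by-floor construction concrete, below is a
minimal Python sketch of that experiment-driven loop. Every function name in
it is an illustrative stand-in rather than any real library's API; the point
is only that monitoring feeds back into data processing and retraining, so
the cycle never terminates in a finished "building."

```python
import random

def process_data():
    # Stand-in for ingesting and feature-engineering fresh data.
    return [random.random() for _ in range(100)]

def train(features):
    # Stand-in for model development; the "model" here is just a mean.
    return sum(features) / len(features)

def evaluate(model, features):
    # Stand-in for offline validation; returns a score in [0, 1].
    return random.uniform(0.7, 1.0)

def drift_detected():
    # Stand-in for post-deployment monitoring (model ops).
    return random.random() < 0.5

def lifecycle(threshold=0.85, max_cycles=5):
    """The whole life cycle iterates: each stage feeds back into the others."""
    for cycle in range(max_cycles):
        features = process_data()                 # data processing
        model = train(features)                   # model development
        if evaluate(model, features) < threshold:
            continue                              # failed experiment: iterate again
        print(f"cycle {cycle}: deployed model {model:.3f}")
        if drift_detected():
            continue                              # drift reopens the loop
        break                                     # stable for now, never quite "done"

lifecycle()
```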
Cybersecurity as a public good
The necessity to privately provision cyber security has resulted in a
significant gap between the demand for cyber security professionals and
the supply of professionals with appropriate skills. Multiple studies have
identified cyber security as one of the domains with the highest skills
gaps. When a significant skills gap occurs in the market, it results in
two things. The remuneration demanded by professionals will skyrocket,
since many employers are chasing scarce talent. And professionals who
are not especially skilled will also survive — rather, thrive — since the
lack of alternatives means they will continue to be in demand. ...
Security as a public good involves trade-offs with privacy. Whether it is
police patrols or CCTV cameras, a trade-off with
privacy is imperative to make security a public good. The privacy
trade-off risks will be higher in the cyber world because technology would
provide the capability to conduct surveillance at larger scale and
greater depth. It is crucial, delicate — and hence difficult — to strike
the right balance between security and privacy such that the extent of
privacy sacrificed meets the test of proportionality. However, the
complexity of the task, and the risks associated with it, should not
deter us from attempting to strike that balance.
The Art and Science of Architecting Continuous Intelligence
Loosely defined, machine data is generated by computers rather than
individuals. IoT equipment sensors, cloud infrastructure, security
firewalls and websites all throw off a blizzard of machine data that
measures machine status, performance and usage. In many cases the same
math can analyze machine data for distinct domains, identifying patterns,
outliers, etc. Enterprises have well-established processes such as
security information and event management (SIEM), and IT operations
(ITOps), that process machine data. Security administrators, IT managers
and other functional specialists use mature SIEM and ITOps processes on a
daily basis. Generally, these architectures perform functions similar to
those in the first approach, although streaming is a more recent addition.
Another difference is that many machine-data architectures have more
mature search and index capabilities, as well as tighter integration with
business tasks and workflow. Data teams typically need to add the same two
functions to complete the CI picture. First, they need to integrate doses
of contextual data to achieve similar advantages as those outlined above.
Second, they need to trigger business processes, which in this case might
mean hooking into robotic process automation tools.
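As a rough illustration of those two additions, the sketch below enriches a
raw SIEM-style firewall event with contextual data and then triggers a
downstream business process. The event fields, the device-to-context table,
and the trigger_rpa() hook are all hypothetical placeholders, not the API of
any particular RPA tool.

```python
# Hypothetical contextual data: maps a device ID to business context.
DEVICE_CONTEXT = {
    "fw-01": {"site": "Chicago DC", "owner": "netops"},
    "fw-02": {"site": "London DC", "owner": "secops"},
}

def trigger_rpa(task, payload):
    # Stand-in for a robotic process automation webhook or queue.
    print(f"RPA task queued: {task} -> {payload}")

def handle_event(event):
    # 1. Integrate contextual data into the raw machine-data event.
    context = DEVICE_CONTEXT.get(event["device"], {})
    enriched = {**event, **context}
    # 2. Trigger a business process when an outlier shows up.
    if enriched["failed_logins"] > 50:
        trigger_rpa("open-incident", enriched)

# Example: one raw event thrown off by a firewall.
handle_event({"device": "fw-01", "failed_logins": 73})
```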
Fintech Startups Broke Apart Financial Services. Now The Sector Is Rebundling
When fintech companies began unbundling, the tools got better but
consumers ended up with 15 personal finance apps on their phones. Now, a
lot of new fintechs are looking at their offerings and figuring out how to
manage all of a person’s personal finances so that other products can be
enhanced, said Barnes. “We are not trying to be a bunch of products, but
more about how each product helps the other,” Barnes said. “If we offer a
checking account, we can see income coming in and be able to give you
better access to borrowing. That is the rebuild—how does fintech serve all
of the needs, and how do we leverage it for others?” Traditional banking
revolves around relationships for which banks can sell many products to
maximize lifetime value, said Chris Rothstein, co-founder and CEO of San
Francisco-based sales engagement platform Groove, in an interview.
Rebundling will become a core part of workflow and a way for fintechs to
leverage those relationships to refer customers to other products, he
said. “It makes sense long-term,” Rothstein said. “In financial services,
many people don’t want all of these
organizations to have their sensitive data. Rebundling will also force
incumbents to get better.”
Microsoft Glazes 5G Operator Strategy
Microsoft’s 5G strategy links the private Azure Edge Zones service it
announced earlier this year, Azure IoT Central, virtualized evolved packet
core (vEPC) software it gained by acquiring Affirmed Networks, and
cloud-native network functions it brought onboard when it acquired
Metaswitch Networks. Combining those services under a broader portfolio
allows Microsoft to “deliver virtualized and/or containerized network
functions as a service on top of a cloud platform that meets the operators
where they are, in a model that is accretive to their business,” Hakl
said. “We want to harness the power of the Azure ecosystem, which
means the developer ecosystem, to help [operators] monetize network
slicing, IoT, network APIs … [and] use the power of the cloud” to create
the same type of elastic and scalable architecture that many enterprises
rely on today, he explained. That vision is split into two parts: the
Azure Edge Zones, which effectively extends the cloud to a private edge
environment, and the various pieces of software that Microsoft has
assembled for network operators. On the latter, Hakl said Microsoft “could
have gone out and had our customers teach us that over time. Instead, we
acquired two companies that brought in hundreds of engineers that have
telco DNA and understand the space.”
Artificial intelligence for brain diseases: A systematic review
Among the various ML solutions, Deep Neural Networks (DNNs) are nowadays
considered as the state-of-the-art solution for many problems, including
tasks on brain images. Such human brain-inspired algorithms have been proven
to be capable of extracting highly meaningful statistical patterns from
large-scale and high-dimensional datasets. A DNN is a DL algorithm aiming to
approximate some function f*. For example, a classifier can be seen as a
function y = f(x; θ) mapping a given input x to a category labeled y, where
θ is the vector of parameters that the model learns in order to make the
best approximation of f*. Artificial Neural Networks (ANNs) are built out
of a densely interconnected set of simple units, where each unit takes a
number of real-valued inputs (possibly the outputs of other units) and
produces a single real-valued output (which may become the input to many
other units). DNNs are called networks because they are typically
represented by composing together many functions. The overall length of the
chain gives the depth of the model; from this terminology, the name “deep
learning” arises.
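A small NumPy sketch may make that terminology concrete: the network below is
a chain of three composed functions, its depth is the length of that chain,
and θ is the set of weights a training procedure would adjust to approximate
f*. The layer sizes are arbitrary assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# theta: parameters of a three-function chain f(x) = f3(f2(f1(x))).
theta = [
    (rng.normal(size=(4, 8)), np.zeros(8)),  # f1: 4 inputs -> 8 units
    (rng.normal(size=(8, 8)), np.zeros(8)),  # f2: 8 units  -> 8 units
    (rng.normal(size=(8, 3)), np.zeros(3)),  # f3: 8 units  -> 3 categories
]

def f(x, theta):
    # Each unit takes real-valued inputs (the outputs of the previous
    # layer's units) and produces a single real-valued output.
    *hidden, (W_out, b_out) = theta
    for W, b in hidden:
        x = np.maximum(0.0, x @ W + b)  # densely connected layer with ReLU
    return x @ W_out + b_out            # raw scores, one per category

x = rng.normal(size=(1, 4))             # a single 4-dimensional input
y = int(np.argmax(f(x, theta)))         # the predicted category label y
print(f"depth = {len(theta)}, predicted category = {y}")
```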
Things to Consider about Brain-Computer Interface Tech
A BCI is a system that provides a direct connection between your brain and
an electronic device. Since your brain runs on electrical signals like a
computer, it could control electronics if you could connect the two. BCIs
attempt to give you that connection. There are two main types of BCI —
invasive and non-invasive. Invasive devices, like the Neuralink chip,
require surgery to implant them into your brain. Non-invasive BCIs, as you
might’ve guessed, use external gear you wear on your head instead. ... A
recent study suggested that brain-computer interface technology and
NeuraTech in general could measure worker comfort levels in response to
their environment. They could then automatically adjust the lights and
temperature to make workers more comfortable and minimize distractions.
Since distractions take up an average of 2.1 hours a day, these BCIs could
mean considerable productivity boosts. The Department of Defense is
developing BCIs for soldiers in the field. They hope these devices could let
troops communicate silently or control drones with their minds. As promising
as BCIs may be, there are still some lingering concerns with the technology.
While the Neuralink chip may be physically safe, it raised a lot of
questions about digital security.
Microsoft did some research. Now it's angry about what it found
A fundamental problem, said Brill, is the lack of trust in society today. In
bold letters, she declared: "The United States has fallen far behind the rest
of the world in privacy protection." I can't imagine it's fallen behind
Russia, but how poetic if that were true. Still, Brill really isn't happy with
our government: "In total, over 130 countries and jurisdictions have enacted
privacy laws. Yet, one country has not done so yet: the United States." Brill
worries our isolation isn't too splendid. She mused: "In contrast to the role
our country has traditionally played on global issues, the US is not leading,
or even participating in, the discussion over common privacy norms." That's
like Microsoft not participating in the creation of excellent smartphones.
It's not too smart. Brill fears other parts of the world will continue to lead
in privacy, while the US continues to lead in inaction and chaos. It sounds
like the whole company is mad as hell and isn't going to take it anymore. Yet
it's not as if Microsoft has truly spent the last 20 years championing privacy
much more than most other big tech companies. In common with its west coast
brethren, it's been too busy making money.
Quote for the day:
"Leadership is about carrying on when everyone else has given up" -- Gordon Tredgold