Technical Debt In Machine Learning System – A Model Driven Perspective
The biggest system technical debt in machine learning models is
explainability. As machine learning (ML) gains popularity and is applied
successfully in many domains, it also faces increased skepticism and criticism.
In particular, people question whether its decisions are well-grounded and can be
relied on. As it is hard to comprehensively understand their inner workings
after being trained, many ML systems — especially deep neural networks — are
essentially considered black boxes. This makes it hard to understand and explain
the behavior of a model. However, explanations are essential to trust that the
predictions made by models are correct. This is particularly important when ML
systems are deployed in decision support systems in sensitive areas impacting
job opportunities or even prison sentences. Explanations also help to correctly
predict a model’s behavior, which is necessary to avoid silly mistakes and
identify possible biases. Furthermore, they help to gain a well-grounded
understanding of a model, which is essential for further improvement and to
address its shortcomings.
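One common way to probe a black-box model without opening it up is permutation feature importance: shuffle one feature's values and measure how much prediction error grows. The sketch below is illustrative only; the tiny `model` function and the data are stand-ins, not anything from the article.

```python
import random

# Hypothetical black-box model: we only call it for predictions and never
# inspect its internals. In truth it leans heavily on feature 0.
def model(x):
    return 3.0 * x[0] + 0.5 * x[1]

def mse(predict, X, y):
    # Mean squared error of the model's predictions against targets.
    return sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(predict, X, y, feature, trials=20, seed=0):
    # Average increase in error after shuffling one feature's column.
    rng = random.Random(seed)
    baseline = mse(predict, X, y)
    total = 0.0
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        X_perm = [list(row) for row in X]
        for row, v in zip(X_perm, col):
            row[feature] = v
        total += mse(predict, X_perm, y) - baseline
    return total / trials

X = [[i, j] for i in range(5) for j in range(5)]
y = [model(row) for row in X]
```

On this toy data, shuffling feature 0 should raise the error far more than shuffling feature 1, flagging it as the feature the model actually relies on.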
Monoliths to Microservices: 8 Technical Debt Metrics to Know
Technical debt is a major impediment to innovation and development velocity for
many enterprises. Where is it? How do we tackle it? Can we calculate it in a way
that helps us prioritize application modernization efforts? Without a
data-driven approach, you may find your team falling into the 79% of
organizations whose application modernization initiatives end in failure. In
other articles, we’ve discussed the challenges of identifying, calculating and
managing technical debt. ... How can you tell if the technical debt in your
monolithic application is actually hurting your business? One of the most
important metrics that determines investment decisions behind application
modernization initiatives is “How much does it cost to keep around?” The cost of
innovation metric (Image 1) shows a breakdown that makes sense to executive
decision-makers. How much, for each dollar spent, goes to simply maintaining the
application, and how much goes toward innovating new features and
functionality?
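The cost-of-innovation breakdown can be computed with simple arithmetic once you have the spend figures. The numbers below are purely illustrative placeholders, not values from the article or its Image 1.

```python
# Hypothetical annual spend figures for a monolithic application.
maintenance_cost = 790_000  # dollars per year keeping the application running
innovation_cost = 210_000   # dollars per year on new features and functionality

total_spend = maintenance_cost + innovation_cost
maintenance_share = maintenance_cost / total_spend
innovation_share = innovation_cost / total_spend

print(f"Of each dollar spent, ${maintenance_share:.2f} goes to maintenance "
      f"and ${innovation_share:.2f} to innovation")
```

A high maintenance share per dollar is the kind of signal executives can weigh directly when prioritizing modernization work.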
Major shift detected in smart home technology deployment
One of the key trends revealed was that home tech users’ growing appetite for
internet of things (IoT) and smart home technologies shows no sign of slowing
down. The study found that on a global basis, the average number of connected
devices per home stood at 17.1 at the end of June 2022, up 10% compared with the
same period a year previously. Europe showed the biggest change, with the
average number of connected devices per Plume household increasing by 13% to
17.4. Plume-powered homes in the US were found to have the highest penetration
of connected devices to date, with an average of 20.2 per home. With up to 10%
more devices in Plume-powered households, there was an upward trend (11%) in
data consumption across the Plume Cloud. However, the biggest decrease in data
consumption was seen in fitness bikes, down by 23%, which likely reflects a
change in consumer behaviour, with people returning to the office and exercising
outdoors or at the gym as they adjust to the post-pandemic world of hybrid
working.
Edge device onboarding: What architects need to consider
Your solution must also take device security into account. As part of every
deployment, you will probably need to include sensitive data, such as passwords,
certificates, tokens, or keys. How do you plan to distribute them? If you decide
to inject those items into the images or templates, you create risk, since
someone could access the image and extract that sensitive information. It's
better to have the device download them at installation time using a secure
channel. This means the edge device has to download these secrets from your
central server. But how will you set up that secure channel? You could use
encrypted communications or a virtual private network (VPN) tunnel, but that's
not enough. How can you be sure that the device is what it says it is and not a
possible attacker trying to steal information or gain access to your network?
You have another concern: authentication and authorization. Authentication is
especially important for companies that use third-party providers to create the
device images or add other value to the supply chain.
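One way to let the central server check that a device is what it says it is, before releasing any secrets, is an HMAC challenge-response over a per-device key provisioned at manufacturing time. This is a minimal local sketch of that idea; the device IDs, keys, and function names are all hypothetical, and a real deployment would run this inside an encrypted channel.

```python
import hmac
import hashlib
import secrets

# Hypothetical server-side registry of per-device keys provisioned at the factory.
DEVICE_KEYS = {"edge-001": b"per-device-provisioning-key"}

def server_challenge():
    # Fresh random nonce per onboarding attempt, so responses can't be replayed.
    return secrets.token_bytes(16)

def device_response(device_key, challenge):
    # The device proves key possession by MACing the server's nonce.
    return hmac.new(device_key, challenge, hashlib.sha256).hexdigest()

def server_verify(device_id, challenge, response):
    # The server recomputes the MAC and compares in constant time.
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = server_challenge()
resp = device_response(DEVICE_KEYS["edge-001"], challenge)
```

Only after verification succeeds would the server hand the device its certificates, tokens, or other secrets over the secure channel.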
Governing Microservices in an Enterprise Architecture
Microservice development works best in a domain-driven architecture, which
models the applications based on the organization’s real-world challenges. A
domain-driven architecture assesses the enterprise infrastructure in light of
business requirements and how to fulfill them. Most organizations already have a
domain-driven design strategy in place that maps the architecture to business
capabilities. Bounded context is a strategy within domain-driven design:
autonomous teams responsible for microservices are formed around bounded
contexts, i.e., areas of responsibility such as inventory management, product
discovery, order management, and online transactions. The domain expertise
resides within the team, so the enterprise architect’s responsibility is to
guide development to align with strategic goals, balancing immediate needs and
future business objectives. When governing microservices as part of the
enterprise, applying the C4 model for software architecture—context, containers,
components and code—makes sense.
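The bounded-context idea can be made concrete as a small ownership map that a governance check could run against. This sketch is illustrative only; the team and service names are invented, not taken from the article.

```python
from dataclasses import dataclass

# Illustrative only: bounded contexts as the unit of team ownership,
# mirroring the areas of responsibility named above.
@dataclass(frozen=True)
class BoundedContext:
    name: str
    owning_team: str
    services: tuple  # microservices that live entirely inside this context

contexts = [
    BoundedContext("inventory-management", "inventory-team",
                   ("stock-svc", "replenish-svc")),
    BoundedContext("order-management", "orders-team",
                   ("order-svc", "invoice-svc")),
]

def owner_of(service):
    # Governance rule: every service belongs to exactly one bounded context.
    owners = [c.owning_team for c in contexts if service in c.services]
    assert len(owners) == 1, f"{service} must live in exactly one bounded context"
    return owners[0]
```

An enterprise architect could keep such a map alongside C4 container diagrams to spot services that straddle contexts or lack a clear owning team.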
The clash of organizational transformation and linear thinking
The task of organizational transformation in a complex world can be likened to
that of herding cats. An extremely linear thinker, faced with 20 cats on the
left side of a room and wanting to move them to the right, might pick up one
cat, move it to the right, and repeat. Of course, that cat is unlikely to stay
on the right side of the room, and our linear thinker is unlikely to outlast 20
cats. But it is possible to set conditions that will cause most, if not all, of
the cats to end up on the right, like tilting the floor. ... Defining a clear
purpose for an organizational transformation calls upon one of the most basic
tasks of leadership: to show people the way forward, and to show why the new
world they are being asked to build is superior to the old. The transformation
must express the possibility of a new order and must be anchored in what would
be considered breakthrough results. Without this clear purpose, the effort
required to successfully transform the organization will not seem worthy of
commitment on the part of those required to put it into action.
Why Today's Businesses Need To Focus On Digital Trust And Safety
Consumers are paying for the cybersecurity mistakes made by corporations.
Ransomware continues to affect consumers, businesses, critical infrastructure
and government entities, costing them millions of dollars. In 2021, more than 22
billion personal records were exposed in data breaches, with the Covid-19
pandemic accelerating credit card fraud and phishing attacks. All of this has
left consumers more worried than ever about the privacy of their sensitive data.
... Websites and mobile apps rely on third parties to provide rich features like
shopping carts, online payment, advertising, AI-based chat and customer support.
But third-party code is rarely monitored for safety as today’s security tools
lack the necessary insight. The result is that enterprise digital assets are
manipulated into channels that enable credit card skimming attacks, malicious
advertising (malvertising), targeted ransomware delivery and worse. As this
activity continues to rise, consumers feel increasingly less safe using their
favorite platforms.
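One standard defense against unmonitored third-party code is Subresource Integrity (SRI): publish a hash of the vendor script so the browser refuses to run it if it changes. The script body below is a stand-in for a real vendor file, used only to show how the integrity value is computed.

```python
import base64
import hashlib

# Stand-in for the body of a third-party script your pages load.
script_body = b"console.log('third-party widget');"

# SRI integrity values are "sha384-" plus the base64-encoded SHA-384 digest.
digest = hashlib.sha384(script_body).digest()
sri = "sha384-" + base64.b64encode(digest).decode()

# The value would then go into the script tag, e.g.:
#   <script src="https://vendor.example/widget.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
print(sri)
```

If the vendor's file is later manipulated, for instance by a skimming attack, its hash no longer matches and the browser blocks it instead of executing it.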
5 Steps to Successfully Reinvent Your Organization
Don't wait for something catastrophic to occur before you start trying to
reinvent your business. Oftentimes, you will start to notice small, clear
signals. Recognizing these warning signs early can mean the difference between a
smooth reinvention process and one that's painful or difficult. What signals
should you look out for? Take the job market, for example. We know that
employees are leaving their jobs in record numbers. Microsoft found that as many
as 41% of workers have plans to quit in the near future. The reasons, according
to a Pew Research Center survey, are low pay (63%), lack of advancement
opportunities (63%) and feeling disrespected at work (57%). Although salary
increases might not be in the budget this year, you can stave off issues by
reinventing your organization's culture or approach to advancement. ... Use your
entire team's input and advice when trying to identify opportunities for
experimentation. Arrive at a decision, execute, learn, and move on. If you fail,
pivot quickly. Using agile methods when reinventing creates an environment where
experimentation is safe and there is tolerance for failure.
The Applications Of Data Science And The Need For DevOps
The importance of DevOps cannot be overstated. DevOps engineers are specialists
who help developers, data scientists, and IT professionals collaborate on
projects. Project managers, or their chain of command, oversee the work of
developers, who constantly push to deliver all product features as quickly as
possible. The IT professionals ensure that all networks, firewalls, and servers
are operating correctly, while the data scientists iterate on every model
variable and structure. You might be wondering why DevOps is important in this
industry. The answer is fairly straightforward: DevOps serves as a liaison
between developers and IT. Its key practices include testing, packaging,
integration, and deployment, and it also addresses cybersecurity. ...
Programming errors are a leading cause of team failure. DevOps encourages
frequent code releases within a short development cycle, which makes finding
flawed code relatively straightforward. With this, the team can use its time
better, employing robust programming concepts to reduce the likelihood of
implementation failure.
How to Test Low Code Applications
In a low code platform, you build an application by means of a user interface.
For instance, building screens by dragging and dropping items and building logic
using process-like flows. This sounds simple but it can be very complex and
error-prone. We’ve seen four generations of low code applications. First, there
were small, simple, stand-alone applications. Then we have small apps on top of
SAP, Oracle Fusion or Microsoft Dynamics. The third generation were
business-critical but still small apps to offer extra functionality besides the
ERP system. With these apps, there is no workaround if they fail. Now we’re building
big, complex, business-critical core systems that should be reliable, secure and
compliant. The level of testing increases with every generation and in the
fourth generation, we see that testing is only slightly different from testing
high code applications. ... Testing is important if you want to limit the risks
when you go into production. Especially when the application is critical for its
users, or is technically complex, you should test it in a professional way.
Quote for the day:
"Leaders think and talk about the solutions. Followers think and talk about the problems." -- Brian Tracy