The role of the database in edge computing
In a distributed architecture, data storage and processing can occur in multiple
tiers: at the central cloud data centers, at cloud-edge locations, and at the
client/device tier. At the client tier, the device could be a mobile phone, a
desktop system, or custom embedded hardware. Moving from cloud to client, each
tier provides stronger guarantees of service availability and responsiveness
than the tier before it. Co-locating the database with the application on the device would
guarantee the highest level of availability and responsiveness, with no reliance
on network connectivity. A key aspect of distributed databases is the ability to
keep the data consistent and in sync across these various tiers, subject to
network availability. Data sync is not about bulk transfer or duplication of
data across these distributed islands. It is the ability to transfer only the
relevant subset of data at scale, in a manner that is resilient to network
disruptions. For example, in retail, only store-specific data may need to be
transferred downstream to store locations.
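To make the filtered-sync idea concrete, here is a minimal Python sketch of a store-level sync client. It is not any vendor's API: the fetch_changes callback, the store_id filter, and the revision checkpoint are illustrative assumptions, but they show how only the relevant subset of data is pulled down and how a checkpoint lets sync resume cleanly after a network disruption.
```python
import time
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    store_id: str
    payload: dict
    revision: int


class EdgeSyncClient:
    """Pulls only the documents relevant to one store, resuming across outages."""

    def __init__(self, fetch_changes, store_id, retry_seconds=5):
        self._fetch_changes = fetch_changes   # callable(since_revision) -> list[Document]
        self._store_id = store_id             # only this store's subset is kept locally
        self._retry_seconds = retry_seconds
        self._checkpoint = 0                  # highest revision successfully processed
        self.local_store = {}                 # doc_id -> Document

    def sync_once(self):
        """Apply one batch of upstream changes, keeping only the relevant subset."""
        for doc in sorted(self._fetch_changes(self._checkpoint), key=lambda d: d.revision):
            if doc.store_id == self._store_id:
                self.local_store[doc.doc_id] = doc
            self._checkpoint = max(self._checkpoint, doc.revision)

    def run(self, cycles):
        """Sync repeatedly; a dropped connection simply retries from the last checkpoint."""
        for _ in range(cycles):
            try:
                self.sync_once()
            except ConnectionError:
                time.sleep(self._retry_seconds)


# Example: a cloud-side change feed containing documents for two stores.
changes = [
    Document("sku-1", "store-017", {"price": 4.99}, revision=1),
    Document("sku-2", "store-042", {"price": 2.49}, revision=2),
]
client = EdgeSyncClient(lambda since: [d for d in changes if d.revision > since], "store-017")
client.sync_once()
print(list(client.local_store))   # ['sku-1'] -- only store-017's data was synced
```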
Data management in the digital age
To ensure effective data management, organisations can adopt several strategies
and tactics that have already proven their worth. The first of
these is a comprehensive risk assessment. Performing risk assessments regularly
will ensure that you can identify and prioritise vulnerabilities before they
become gaping security holes that can be exploited. Ongoing risk assessments
should be bolstered by robust and current security and data management policies
that reflect the threat landscape. “You also need to implement employee training
and communication because humans are often the weakest link in even the most
advanced security system,” says Grimes. “You must ensure that security is
understandable and accessible and that the lessons are driven home through
constant reminders and training programmes. All it takes is one click to bring
down the most sophisticated security system on the planet.” It’s also important
to collaborate with vendors and partners that understand the security landscape
and have the tools and expertise required to support the organisation’s security
posture.
Coaching IT pros for leadership roles
You can teach someone to code, manage money, and complete the tasks of being a
manager. But teaching is limited. To develop a leader, you have to coach them to
become someone who can make decisions on their own, communicate well, and plan
strategically. But the transition from teacher to coach can be challenging. ...
Then practice what Davis calls the “ask first, tell second” method of coaching.
“Ask them what’s exciting about this. Then ask what’s scary?” And, since the
core skill of coaching is listening, “give them the time and space to answer and
listen to what they say,” she says. They might not want to give up the thing
they are good at to learn something hard. They might feel jealous of team
members who get to keep their hands on the technology. They might fear that
others aren’t good enough to do the work they’ve been doing. And they might not
yet see the benefits of a leadership role. In the “tell” portion, point out the
influence they will have on larger issues in the company, the essential role of
managers on the team, the pleasure of helping people grow into larger careers,
and how this will give them a seat at the table.
4 characteristics of enterprise application platforms that support digital transformation
The need to deploy applications across different cloud
infrastructures—public cloud, private cloud, physical, virtual, and edge—based
on business needs is a key requirement for most established enterprises. As more
and more business value is created with the Internet of Things (IoT), edge
computing, and artificial intelligence and machine learning (AI/ML), the need to
deploy applications everywhere from devices, edge data centers,
on-premises and colocation facilities to the public cloud ecosystem is growing
exponentially. For an enterprise, a baseline application platform that can be
deployed on all these cloud provider types is essential, if not vital, to
support current and future business needs. Another aspect to consider is the
growth and distribution of enterprise data. As the famous saying goes, "data is
the new oil," and the amount and pace of enterprise data growth are
unprecedented. Enterprises are looking at options to leverage this data to
create meaningful business insights.
How to Combine RPA and BPM the Smart Way
Seamless digital integration is more than just cobbling together the best
digital solutions on the market. How these advanced technologies interact makes
a huge difference. Technologies designed to work together are crucial to
achieving the productivity gains promised by digital transformation. With a
comprehensive platform, organizations don’t need to worry about building
integrations because the platform already includes them. Moreover, a single
platform is easier to buy and manage because it comes from one licensor,
rather than requiring separate procurement processes with multiple suppliers.
Companies need to take care when determining which IA platform to adopt. The
benefits of a comprehensive platform are increasingly recognized by vendors and
their customers, pushing suppliers to put together multifeatured automation
platforms. If companies choose a platform insufficient for their needs, they
face reworking costs down the road. Some organizations, however, have
already taken on technical debt and are looking to rework their digital
transformation journey.
5 Technologies Powering Cloud Optimization
Cloud cost management is a critical component of optimization that helps
organizations to monitor and manage their cloud spend. The goal is to ensure
that organizations are only paying for the cloud resources they actually need
and that they are using those resources efficiently. ... Autoscaling is a
technology that enables organizations to automatically scale their cloud
resources up or down as needed to meet changing demand. The goal of autoscaling
is to ensure that organizations always have the right amount of resources to
support their workloads, while minimizing cost and keeping systems available
when they are needed. Autoscaling works by monitoring the
performance and usage of cloud resources, such as compute instances, storage and
network traffic, and automatically adjusting the size of those resources to meet
changing demand. ... An API gateway is a server that acts as an intermediary
between an application and one or more microservices. The API gateway is
responsible for request routing, composition and protocol translation, which
enables microservices to communicate with each other securely and efficiently.
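As a rough illustration of the target-tracking rule many autoscalers apply, here is a short Python sketch; the function name, target utilization, and instance bounds are assumptions for the example rather than any provider's defaults. The fleet is resized in proportion to how far observed utilization sits from the target.
```python
import math


def desired_instance_count(current_count, observed_cpu_pct, target_cpu_pct=60.0,
                           min_count=1, max_count=20):
    """Target tracking: scale the fleet so average CPU utilization approaches the target."""
    if observed_cpu_pct <= 0:
        return min_count
    desired = math.ceil(current_count * observed_cpu_pct / target_cpu_pct)
    return max(min_count, min(max_count, desired))


# Example: 4 instances running at 90% CPU against a 60% target -> scale out to 6.
print(desired_instance_count(4, 90.0))   # 6
# Example: 4 instances at 20% CPU -> scale in to 2.
print(desired_instance_count(4, 20.0))   # 2
```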
Streaming Data Management for the Edge
Managing data at the edge is actually quite easy. What’s hard is how you
monetize it. How do you get value from it? How do you take the data that’s
streaming into the organization and analyze it, inference on it, and act on it
as it’s coming in? How do you use this data to help your customer or
stakeholder? Think about a retailer who is trying to do in-store queue
management, trying to identify situations where customers are abandoning their
carts because the lines are too long, and trying to watch for theft and
shrinkage. It isn't the management of the data that is the big challenge.
It is the ability to take that data and make better operational decisions at
the point of customer interaction or operational execution. That’s the
challenge. And so, we need a different mental frame as well as a different
data and analytics architecture, one built around the fact that this data
arriving in real time has value as it's coming in. Historically, in
batch worlds, we didn't care about real-time data. The data came in.
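As a small illustration of acting on data as it arrives, here is a hedged Python sketch of the in-store queue example above; the QueueLengthMonitor name, window size, and threshold are invented for the sketch, but it shows a decision being made per reading rather than in a later batch.
```python
from collections import deque
from statistics import mean


class QueueLengthMonitor:
    """Flags long checkout lines from a stream of per-lane queue-length readings."""

    def __init__(self, window_size=12, alert_threshold=6.0):
        self._window = deque(maxlen=window_size)   # most recent readings only
        self._alert_threshold = alert_threshold    # average customers per lane

    def observe(self, queue_length):
        """Ingest one reading and decide, at that moment, whether to act."""
        self._window.append(queue_length)
        if len(self._window) == self._window.maxlen and mean(self._window) > self._alert_threshold:
            return "open another lane"             # operational decision at the point of interaction
        return None


monitor = QueueLengthMonitor(window_size=3, alert_threshold=5.0)
for reading in [4, 6, 7, 8]:
    action = monitor.observe(reading)
    if action:
        print(action)
```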
DevOps isn’t dead: How platform engineering enables DevOps at scale
Platform engineers could automate almost all this work by building it into an
IDP. For example, instead of manually setting up Git repositories, developers
can request a repository from the IDP, which would then create it. The IDP
would then assign the right user group and automatically integrate the correct
CI/CD template. The same pattern applies to creating development environments
and deploying core infrastructure. The IDP acts as a self-service platform for
developers to request services and apply configurations, knowing security best
practices and monitoring are built in by default. IDPs can also automatically
set up projects in project tracking software and documentation templates. As
you can see, platform engineers don’t replace DevOps processes. They enhance
them by building a set of standardized patterns into a self-service internal
development platform. This removes the burden of project initialization so
teams can start providing business value immediately, rather than spending the
first few weeks of a project setting up and working through teething
issues.
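As a rough sketch of what that self-service pattern can look like, consider the Python below; every name in it, including InMemoryPlatform and provision_project, is an illustrative stand-in rather than any real IDP's API. A single request fans out into the repository, the access group, the CI/CD template, and the tracking project.
```python
from dataclasses import dataclass


@dataclass
class ProjectRequest:
    team: str
    service_name: str
    language: str


class InMemoryPlatform:
    """Stand-in for the Git host, CI system, and tracker that sit behind an IDP."""

    def __init__(self):
        self.repos, self.pipelines, self.projects = {}, {}, {}

    def create_repository(self, name):
        self.repos[name] = {"access": []}
        return name

    def grant_access(self, repo, group):
        self.repos[repo]["access"].append(group)

    def attach_pipeline(self, repo, template):
        self.pipelines[repo] = template

    def create_tracking_project(self, name, owner):
        self.projects[name] = owner


def provision_project(request: ProjectRequest, platform: InMemoryPlatform) -> str:
    """One self-service request that encodes the platform team's standard patterns."""
    repo = platform.create_repository(request.service_name)                     # no manual repo setup
    platform.grant_access(repo, group=request.team)                             # right user group by default
    platform.attach_pipeline(repo, template=f"{request.language}-default")      # approved CI/CD template
    platform.create_tracking_project(request.service_name, owner=request.team)  # tracking project set up too
    return repo


platform = InMemoryPlatform()
provision_project(ProjectRequest(team="payments", service_name="refund-api", language="python"), platform)
print(platform.pipelines)   # {'refund-api': 'python-default'}
```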
The Dos and Don‘ts of API Monetization
Before diving into best practices and antipatterns, let’s go over the core
technical requirements for enabling API monetization:
Advanced metering: Because different customers may have different levels of access to APIs under varying pricing plans, it’s critical to be able to manage access to API requests in a highly granular way, based on factors like total allowed requests per minute, the time of day at which requests can be made and the geographic location where requests originate.
Usage tracking: Developers must ensure that API requests can be measured on a customer-by-customer basis. In addition to basic metrics like total numbers of requests, more complex metrics, like request response time, might also be necessary for enforcing payment terms.
Invoicing: Ideally, invoicing systems will be tightly integrated with APIs so that customers can be billed automatically. The alternative is to prepare invoices manually based on API usage or request logs, which is not a scalable or efficient approach.
Financial analytics: The ability to track and assess the revenue generated by APIs in real time is essential to many businesses that sell APIs.
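To make the metering and usage-tracking requirements concrete, here is a minimal Python sketch; the plan names, quotas, and the Meter class are assumptions for illustration, not a real billing product. It enforces a per-minute quota per customer and records served requests so an invoicing system could bill from the same counts.
```python
import time
from collections import defaultdict

# Illustrative pricing plans: allowed requests per minute (assumed tiers, not a real product's).
PLANS = {"free": 60, "pro": 600, "enterprise": 6000}


class Meter:
    """Per-customer metering: enforces a plan's per-minute quota and records usage for billing."""

    def __init__(self, customer_plans):
        self._plans = customer_plans                 # customer_id -> plan name
        self._window_start = defaultdict(float)      # customer_id -> start of current minute
        self._window_count = defaultdict(int)        # requests served in the current minute
        self.usage = defaultdict(int)                # lifetime served requests, feeds invoicing

    def allow(self, customer_id, now=None):
        now = time.time() if now is None else now
        if now - self._window_start[customer_id] >= 60:
            self._window_start[customer_id] = now    # roll over to a new one-minute window
            self._window_count[customer_id] = 0
        limit = PLANS[self._plans[customer_id]]
        if self._window_count[customer_id] >= limit:
            return False                             # over quota: reject (or surcharge)
        self._window_count[customer_id] += 1
        self.usage[customer_id] += 1                 # billed usage counts only served requests
        return True


meter = Meter({"acme": "free"})
print(sum(meter.allow("acme", now=0) for _ in range(70)))   # 60 allowed, 10 rejected
```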
How to unleash the power of an effective security engineering team
Security engineering teams should be able to build and operate the services
they produce. You build it. You run it. This level of ownership within a group
is vital from a technical competence standpoint and culturally, setting the
tone around accountability. Technically speaking, a team that can own its
services will proficiently manage infrastructure, CI/CD tooling, security
tooling, application code, deployments, and the operational telemetry emitted
from a service. In addition, the skills backing all that support as a team are
likely to be highly transferable in support of other groups across the
organization. Teams that understand, embrace, and optimize for DevX are likely
to be favored. Beyond that, such a team will have a particular focus on eliminating
friction. Friction makes things take longer and cost more, creates longer
learning cycles, and can lead to frustration. Less friction means
things generally run much more smoothly. Sometimes friction is necessary and
should be intentional. An example is a forced code review on critical code
before it's merged.
Quote for the day:
"Leadership is liberating people to
do what is required of them in the most effective and humane way
possible." -- Max DePree