AI and Overcoming User Resistance
If users are concerned, or even worried, about AI, that can lead to user
resistance, a dynamic IT pros know well from years of implementing new systems
that alter business processes, require employee retraining, and may even change
employees’ jobs. So, are process change and user
resistance any different when you introduce AI? I would argue yes. You’re not
just retraining an employee on a new set of steps for processing an invoice or
taking an order. You’re actually introducing an automated thinking process into
what an employee has been doing. Now, technology is going to make or recommend
decisions that the employee used to make. This can lead to employees
experiencing a loss of empowerment and control. ... This is exactly the “sweet
spot” that companies (and IT) should aim for with AI projects: an environment
where everyone sees beneficial value from AI, and where no one feels
disenfranchised. This is an achievable environment if users are engaged early in
business process redefinition and in how AI will work.
Eyes everywhere: How to safely navigate the IoT video revolution
Users are rightfully wary of bringing even more cameras into their homes and
offices. The good news is that they, too, can protect their camera-enabled
devices with some simple steps. First, customize. This includes changing default
usernames and passwords, updating the device’s firmware and software, and
staying informed about the latest security threats. This is a simple yet
effective way to create a barrier between yourself and would-be hackers. Next,
take it to the edge. Processing and storing data at the edge instead of the
cloud is another surefire way to protect your endpoints. After all, by storing
the information under your own lock and key, you can be sure about who can
access it and how. Users also benefit from reduced latency by storing the
information closer to home, which is particularly important with heavy video
feeds. Finally, buy trusted brands. A defense is only as strong as its
weakest link, so choose companies with a proven track record when it comes
to privacy and security.
Why HTTP Caching Matters for APIs
In some caching strategies, especially for dynamic resources, the cache can
store not only the complete response but also the individual elements or
changes that make up the response. This approach is known as “delta caching”
or “incremental caching.” Instead of sending the complete response, delta
caching sends only the changes or updates made to the cached version of the
resource. ... Delta caching is particularly useful for scenarios where
resources change frequently, but the changes are relatively small compared to
the complete resource. For example, in a collaborative document editing
application, delta caching can be employed to send only the changes made by a
user to a shared document, instead of sending the entire document every time
it is updated. ... Caching enhances application resilience by reducing the
risk of service disruptions during periods of high demand. By serving cached
responses, even if the backend servers experience temporary performance
issues, the application can continue to respond to a significant portion of
requests from the cache. The caching layer acts as a buffer between the
backend servers and the clients.
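To make the delta idea concrete, here is a minimal Python sketch of delta caching for a flat JSON resource, loosely in the spirit of HTTP delta encoding (RFC 3229 and its 226 IM Used status). The names (respond, _versions, _tag) and the shallow field-level diff are illustrative assumptions, not anything from the article.

```python
# Minimal sketch of delta caching for a flat JSON resource (hypothetical names).
# The server remembers each snapshot it has sent, keyed by an ETag-style tag,
# and answers a request with only the fields that changed since the version
# the client says it holds.

import hashlib
import json

_versions: dict[str, dict] = {}  # tag -> snapshot previously sent to clients

def _tag(resource: dict) -> str:
    """Derive a stable, content-based tag (an ETag stand-in)."""
    blob = json.dumps(resource, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]

def respond(resource: dict, client_tag: str | None):
    """Return (status, tag, body): 304 if the client is current, a delta if
    its base version is known, otherwise the full resource."""
    tag = _tag(resource)
    _versions[tag] = resource
    if client_tag == tag:
        return 304, tag, None                 # client is already up to date
    base = _versions.get(client_tag)
    if base is None:
        return 200, tag, resource             # unknown base: send everything
    changed = {k: v for k, v in resource.items() if base.get(k) != v}
    removed = [k for k in base if k not in resource]
    return 226, tag, {"changed": changed, "removed": removed}  # 226 IM Used

# First request ships the whole document; the next ships only the edit.
doc = {"title": "Spec", "body": "draft text", "rev": 1}
status, tag, body = respond(doc, None)        # -> 200 with the full resource
doc = {**doc, "body": "final text", "rev": 2}
status, tag, body = respond(doc, tag)         # -> 226 with just body and rev
```

A real implementation would also bound how many old snapshots it retains; the fallback to a full 200 response covers clients whose base version has been evicted.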
Author Talks: How to speak confidently when you’re put on the spot
People become nervous for many reasons. More than 75 percent of people report
being nervous in high-stakes communication, be it planned or spontaneous. Past
experience could be a factor, as well as high stakes and the importance of the
goals you’re trying to achieve. Those of us who study this at an academic
level believe that the nervousness is wired into being human. We see this
across all cultures. We see it develop typically in the early teen years and
progress from there. There’s an evolutionary component to it. One of the most
helpful tips is normalizing the anxiety that you feel. You’re not alone. ...
My anxiety management plan has three steps. The first thing I do is hold
something cold in the palms of my hands before I speak. That cools me down.
Secondly, I say tongue twisters to warm up my voice and also to get myself in
the moment. Third, I remind myself, “I am in service of my audience. I am here
to help them.” That really gets me other-focused rather than self-focused.
That’s my anxiety management plan. I encourage everybody to find a plan that
works for them.
Dell customizes GenAI and focuses on data lakehouse
Being able to fine-tune as well as train generative AI is a process that
relies on data, lots and lots of data. For enterprise use cases, that data
isn’t just generic data taken from a public source, but rather is data that an
organization already has in its data centers or cloud deployments and is
likely also spread across multiple locations. To help enable enterprises to
fully benefit from data for generative AI, Dell is building out an open data
lakehouse platform. The data lakehouse concept was originally pioneered by
Databricks as a way of enabling organizations to more easily query data
stored in cloud object storage-based data lakes. The Dell approach
is a bit more nuanced in that it is taking a hybrid approach to data, with a
goal of being able to query data across on-premises as well as multi-cloud
deployments. Greg Findlen, senior VP of data management at Dell, explained during
the press briefing that the open data lakehouse will be able to use Dell
storage and compute capabilities as well as multi-cloud storage.
Don’t try running with data before you can walk
In South Africa, data governance tends to be a grudge investment based on
regulatory issues. However, organisations that don’t do the basics well, and
don’t have mature data governance and established frameworks in place, may
well find they are spending on analytics technologies that don’t live up to
expectations. What stands in the way of getting governance right? Firstly,
it’s not easy. It involves all stakeholders across all domains. It may require
a mindset change, and users may need to learn to use new technology. Secondly,
it can be expensive, and it may take time before the organisation sees the
value of it. One of the biggest problems is that the value of data governance
investments is difficult to quantify in monetary terms. ... Data products
should be supported by the entire CDO capability – including the CDO, data
owners and data stewards – as well as IT, to ensure the data products will add
the required business value. Owners and stewards need to identify and curate
the required data for the products, while also ensuring good quality data and
metadata management to make it more usable for the broader business.
Yes, Software Development is an Assembly Line, but not Like That
Manufacturing engineers produce assembly lines and manufacturing processes
that can produce those units of value. Software engineers are largely the
same, also producing systems and processes that deliver units of value. The
manufactured widget of software is actually the discrete user interactions
with those features and pieces of software, not the features themselves. The
assembly line in software engineering isn’t, as many think, the engineers
producing features. ... Systems like Total Quality Management, which are
focused on driving a cultural mindset of continuous improvement and an entire
company focused on providing very low defect rates, easily translate to
customer satisfaction in software organizations. Just to pick on TQM a bit, if
we were to adapt it to software, we would focus on the number of times users
are impacted by a defect more than the number of open bugs. Instead of
tracking the number of defects and searching for more, we would be tracking
the number of users who either failed to receive the promised value from the
product or had severely diminished value.
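As a toy illustration of that shift in measurement (all names and data invented here, not taken from the piece), compare a raw bug count with a count of distinct users who lost value:

```python
# Toy sketch of the metric shift described above (hypothetical names):
# instead of counting open bugs, count users whose interactions lost value.

from dataclasses import dataclass

@dataclass
class Interaction:
    user_id: str
    feature: str
    failed: bool       # the promised value was not delivered
    degraded: bool     # delivered, but with severely diminished value

def open_bug_count(bugs: list[str]) -> int:
    """The traditional defect count: how many bugs are on the books."""
    return len(bugs)

def impacted_user_count(interactions: list[Interaction]) -> int:
    """The user-centric alternative: distinct users who hit a failed or
    degraded interaction, however many bugs caused it."""
    return len({i.user_id for i in interactions if i.failed or i.degraded})

interactions = [
    Interaction("u1", "checkout", failed=True, degraded=False),
    Interaction("u1", "search", failed=False, degraded=True),
    Interaction("u2", "search", failed=False, degraded=False),
]
print(impacted_user_count(interactions))  # 1: only u1 lost value
```

By this measure, one bug on a hot path can matter more than a dozen open tickets nobody ever hits.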
Cloud Services Without Servers: What’s Behind It
“The basic idea of serverless computing has been around since the beginning of
cloud computing. However, it has not become widely accepted,” explains Samuel
Kounev, who heads the JMU Chair of Computer Science II (Software Engineering).
But a shift can currently be observed in industry and in science: the
focus is increasingly moving towards serverless computing. A recent article in
Communications of the ACM, the magazine of the Association for Computing
Machinery, deals with the history, status, and potential of serverless
computing. Among the authors are Samuel Kounev and Dr. Nikolas Herbst, who
heads the JMU research group “Data Analytics Clouds”. ... “NoOps” is the
first principle, which stands for “no operations”. It means, as described
above, that technical server management, including the hardware and software
layers, is entirely the responsibility of the cloud provider. The second
principle is “utilisation-based billing”, which means that only the time
during which the customer actively uses the allocated computing resources is
billed.
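A minimal sketch of what utilisation-based billing amounts to, with an invented per-GB-second rate and function names that are illustrative only:

```python
# Minimal sketch of utilisation-based billing (rate and names invented):
# the customer is billed only for time their code actively runs, not for
# idle allocated capacity, which the provider absorbs under "NoOps".

RATE_PER_GB_SECOND = 0.0000167  # illustrative price, not a real provider's

def invocation_cost(duration_ms: float, memory_gb: float) -> float:
    """Cost of one function invocation: active time x allocated memory."""
    return (duration_ms / 1000.0) * memory_gb * RATE_PER_GB_SECOND

# A function that runs 1,200 times a day for 80 ms at 512 MB costs a
# fraction of a cent; the idle hours between invocations cost nothing.
daily = 1200 * invocation_cost(duration_ms=80, memory_gb=0.5)
print(f"${daily:.4f} per day")  # ~$0.0008
```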
7 sins of software development
Some software development issues can be fixed later. Building an application
that scales efficiently to handle millions or billions of events isn’t one of
them. Creating effective code with no bottlenecks that surprise everyone when
the app finally runs at full scale requires plenty of forethought and
high-level leadership. It’s not something that can be fixed later with a bit
of targeted coding and virtual duct tape. The algorithms and data structures
need to be planned from the beginning. That means the architects and the
management layer need to think carefully about the data that will be stored
and processed for each user. When a million or a billion users show up, which
layer does the flood of information overwhelm? How can we plan ahead for those
moments? Sometimes this architectural forethought means killing some great
ideas. Sometimes the management layer needs to weigh the benefits against the
costs of delivering a feature at scale. Some data analysis just doesn’t work
well at large scale. Some formulas grow exponentially with more
users.
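As a toy illustration of that last point (even merely quadratic growth, never mind exponential, is enough to sink a design), consider an all-pairs feature; the function and numbers below are invented for illustration:

```python
# Toy illustration (numbers invented): work that grows superlinearly with
# user count looks harmless in testing and overwhelms the system at scale.

def pairwise_ops(n_users: int) -> int:
    """An all-pairs feature (e.g., 'similar users') does n*(n-1)/2 comparisons."""
    return n_users * (n_users - 1) // 2

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} users -> {pairwise_ops(n):.3e} comparisons")

# 1,000 users         -> ~5e5  comparisons: trivial
# 1,000,000 users     -> ~5e11: already a serious batch job
# 1,000,000,000 users -> ~5e17: infeasible without redesigning the algorithm
```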
Organizations grapple with detection and response despite rising security budgets
To aid understanding and evaluation, the study categorized the
responding organizations into "secure creators" and "prone enterprises." The
grouping was done on the basis of the number of solutions used, the adoption
of emerging technologies, and the use of technologies to simplify their
automation environments. The study found that secure creators are more
satisfied with their approach to cybersecurity, experience fewer cybersecurity
incidents, and detect and respond to them more quickly. About 70% of them
are early adopters of emerging technologies. The secure creators are also more
focused on extracting the most value from specific advanced solutions, with
62% already using or in the late stages of implementing AI/ML solutions, as
compared to only 45% of the prone enterprises. "When it comes to technology,
the more clutter an organization has in its armory, the harder it is to pick
up signals and get on top of issues quickly," Watson said.
Quote for the day:
"You’ll never achieve real success
unless you like what you’re doing." -- Dale Carnegie