Why Pre-Skilling, Not Reskilling, Is The Secret To Better Employment Pipelines
In a landscape where the relevance of skills is constantly shifting, Zaslavski
says that organizations should focus on selecting and advancing individuals
based on their potential for learning skills like critical thinking and
resiliency, instead of
focusing on hard skills like coding. ... “By concentrating on these fundamental
elements, as opposed to current technical proficiency or past work history,
organizations position themselves with an agile and future-ready workforce. In
this light, pre-skilling should be an integral part of employers’ talent
strategy pre and post-hiring, from sourcing and recruiting to career pathing and
employee engagement.” ... She points to areas like understanding whether a
potential or existing employee has the EQ and social skills needed to perform as
part of a group, or whether they have the curiosity and analytical intelligence
needed to learn new hard skills, as well as the ambition and work ethic to
achieve results.
“When people have learning ability, drive, and people skills, they will probably
develop new skills faster than others,” she says.
Agile is a concept we all continuously talk about, but what is it really?
Empiricism, teams, user stories, iterations; they are all examples of tools that
we use in Agile, but they are not its purpose. Agile is about empowering people
to take control of their environment and give them complete freedom to discover
how to use available tools in the most effective way. And this applies to the
why too. People adopt Agile to increase efficiency, transparency, velocity,
predictability, quality. But again all these are a result of Agile, not its
goal. It is the mindset that makes it all possible. That is why it is “People
and interactions above processes and tools”. To illustrate this, think about
empiricism itself. Try introducing empiricism into an organisation mired in a
culture of fear and control, and it doesn’t work, no matter what you do. You
can’t force empiricism. People are too busy evading blame and manipulating
information. Think about it: how often do people complain that the
retrospective doesn’t deliver anything, that retrospectives are just sessions
where people complain and nothing changes?
What Will It Take to Adopt Secure by Design Principles?
What does the future of secure by design adoption look like? CISA is continuing
its work alongside industry partners. “Part of our strategy is to collect data
on attacks and understand what that data is telling us about risk and impact and
derive further best practices and work with companies, and really other nations,
to adopt these principles,” Zabierek shares. International collaboration on
secure by design is reflected not only in this CISA initiative but also the
Guidelines for Secure AI System Development. CISA and the UK’s National Cyber
Security Centre (NCSC) led the development of those guidelines, and 16 other
countries have agreed to them. But like the Secure by Design initiative, this
framework is also non-binding. A software manufacturer’s timeline for adopting
secure by design principles will depend on its appetite, resources and the
complexity of its products. But the more demand from government and consumers,
the more likely adoption will happen. Right now, CISA has no plans to track
adoption. “We're more focused on collaborating with industry so that we can
understand best practices and recommend further better guidelines,” says
Zabierek.
Mastering the art of motivation
Once you’ve helped employees connect their dots, the best way to further
motivate them is also the cheapest and easiest, and carries the fewest
unintended consequences. Compliment them on a job well done, whenever they’ve
done a job well enough to be worth noting. Sure, there are wrong ways to use compliments
well enough to be worth noting. Sure, there are wrong ways to use compliments
as motivators. First and foremost the employee you’re complimenting must value
your opinion. If they don’t, they’ll write off your compliment as just so much
noise. Second, a compliment from you should not be an easy compliment to earn.
“I really like your belt,” isn’t going to inspire someone to work inventively
and late. Third, with few exceptions compliments should be public. There’s
little reason for you to be embarrassed about being pleased with someone’s
efforts. With one caveat: Usually you’ll have one or two in your organization
who routinely perform exceptionally well, but also one or two who are plodders
— good enough and steady enough to keep around; not good enough or steady
enough to earn your praise. Find a way to compliment them in public anyway —
perhaps because you prize their reliability and lack of temperament.
Do you need GPUs for generative AI systems?
GPUs greatly enhance performance, but they do so at a significant cost. Also,
for those of you tracking carbon points, GPUs consume notable amounts of
electricity and generate considerable heat. Do the performance gains justify
the cost? CPUs are the most common type of processors in computers. They are
everywhere, including in whatever you’re using to read this article. CPUs can
perform a wide variety of tasks, and they have a smaller number of cores
compared to GPUs. However, they have sophisticated control units and can
execute a wide range of instructions. This versatility means they can handle a
broad range of AI workloads, including generative AI. CPUs can be used to
prototype new neural network architectures
or test algorithms. They can be adequate for running smaller or less complex
models. This is what many businesses are building right now (and will be for
some time) and CPUs are sufficient for the use cases I’m currently hearing
about. CPUs are more cost-effective in terms of initial investment and power
consumption for smaller organizations or individuals who have limited
resources.
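To put the “do the performance gains justify the cost?” question in concrete terms, here is a back-of-the-envelope sketch. The throughput and hourly-price figures are illustrative assumptions, not benchmarks; substitute your own measurements.

```python
def cost_per_million_tokens(tokens_per_second: float, dollars_per_hour: float) -> float:
    """Dollar cost to generate one million tokens at a given throughput and hourly rate."""
    seconds_needed = 1_000_000 / tokens_per_second
    return dollars_per_hour * seconds_needed / 3600

# Hypothetical figures: a CPU instance at $0.40/hr generating 20 tokens/s
# versus a GPU instance at $4.00/hr generating 1,000 tokens/s.
cpu_cost = cost_per_million_tokens(20, 0.40)     # about $5.56 per million tokens
gpu_cost = cost_per_million_tokens(1_000, 4.00)  # about $1.11 per million tokens
```

Under these assumed numbers the GPU wins per token, but only if it stays busy: for small, bursty workloads, hours of idle GPU time can erase the advantage, which is why CPUs remain sufficient for many of the smaller use cases described above.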
How to create an AI team and train your other workers
Building a genAI team requires a holistic approach, according to Jayaprakash
Nair, head of Machine Learning, AI and Visualization at Altimetrik, a digital
engineering services provider. To reduce the risk of failure, organizations
should begin by laying the foundation for quality data, establishing “a single
source of truth strategy,” and defining business objectives. Building a team
that includes diverse roles such as data scientists, machine learning
engineers, data engineers, domain experts, project managers, and
ethicists/legal advisors is also critical, he said. “Each role will contribute
unique expertise and perspectives, which is essential for effective and
responsible implementation,” Nair said. "Management must work to foster
collaboration among these roles, help align each function with business goals,
and also incorporate ethical and legal guidance to ensure that projects adhere
to industry guidelines and regulations." ... It's also important to look for
people who like learning new technology, have a good business sense, and
understand how the technology can benefit the company.
Data is the missing piece of the AI puzzle. Here's how to fill the gap
Companies looking to make progress in AI, says Labovich, must "strike a
balance and acknowledge the significant role of unstructured data in the
advancement of gen AI." Sharma agrees with these sentiments: "It is not
necessarily true that organizations must use gen AI on top of structured data
to solve highly complex problems. Oftentimes the simplest applications can
lead to the greatest savings in terms of efficiency." The wide variety of data
that AI requires can be a vexing piece of the puzzle. For example, data at the
edge is becoming a major source for large language models and repositories.
"There will be significant growth of data at the edge as AI continues to
evolve and organizations continue to innovate around their digital
transformation to grow revenue and profits," says Bruce Kornfeld, chief
marketing and product officer at StorMagic. Currently, he continues, "there is
too much data in too many different formats, which is causing an influx of
internal strife as companies struggle to determine what is business-critical
versus what can be archived or removed from their data sets."
3 ways to combat rising OAuth SaaS attacks
At their core, OAuth integrations are cloud apps that can access data on
behalf of a user, with a defined permission set. When a Microsoft 365 user
installs a MailMerge app into Word, for example, they have essentially
created a service principal for the app and granted it an extensive permission
set with read/write access, the ability to save and delete files, as well
as the ability to access multiple documents to facilitate the mail merge. The
organization needs to implement an application control process for OAuth apps
and determine if the application, like in the example above, is approved or
not. ... Security teams should view user security through two separate lenses.
The first is the way they access the applications. Apps should be configured
to require multi-factor authentication (MFA) and single sign-on (SSO). ...
Automated tools should scan the logs and report whenever an OAuth-integrated
application is acting suspiciously. For example, applications that display
unusual access patterns or geographical abnormalities should be regarded as
suspicious.
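The log-scanning idea above can be sketched as a simple geographic-anomaly check. This is a minimal illustration, not a product feature: the event format and the two-country threshold are assumptions, and a real tool would consume actual OAuth audit logs.

```python
from collections import defaultdict

def flag_geo_anomalies(events, max_countries=2):
    """Return app IDs whose access events span more countries than expected.

    `events` is an iterable of (app_id, country) pairs, a simplified
    stand-in for real OAuth audit-log entries.
    """
    countries_seen = defaultdict(set)
    for app_id, country in events:
        countries_seen[app_id].add(country)
    return {app for app, seen in countries_seen.items() if len(seen) > max_countries}

events = [
    ("mailmerge", "US"), ("mailmerge", "US"),
    ("calendar-sync", "US"), ("calendar-sync", "RU"),
    ("calendar-sync", "NG"), ("calendar-sync", "CN"),
]
print(flag_geo_anomalies(events))  # {'calendar-sync'}
```

In practice the same pattern extends to other signals (unusual access times, sudden spikes in file reads), each feeding a risk score rather than a single hard threshold.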
Cloud cost optimisation: Strategies for managing cloud expenses and maximising ROI
Instead of employing manual resources, streamlining cloud optimisation through
automation could bring enhanced resource savings to the table. The Auto Scaling
service offered by Amazon Web Services (AWS) is a shining example of how firms
can effectively streamline their cloud optimisation in a short time. The service
also enables swift optimisation in response to the changing resource
requirements of systems and servers. ... At the planning stage, firms need to
justify the cloud budget and ensure that unexpected spending is reduced to the
minimum. The same approach has to be followed in the building, deployment, and
control phases so that any unexpected rise in budgets can be adjusted promptly
without throwing the entire financial control into a tizzy. All these steps will
help organisations develop a culture of cost-conscious cloud adoption and help
them perform optimally while keeping costs in check. ... Incorporating cloud
cost optimisation tools is a strategic approach for organisations to streamline
expenditures and enhance ROI.
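As a sketch of the threshold-based logic such auto-scaling services apply, here is a minimal decision function. The utilization thresholds and instance bounds are illustrative assumptions, not AWS defaults.

```python
def desired_capacity(current_instances: int, avg_cpu_percent: float,
                     scale_out_above: float = 70.0, scale_in_below: float = 30.0,
                     min_instances: int = 1, max_instances: int = 10) -> int:
    """Threshold-based scaling decision in the spirit of AWS Auto Scaling policies."""
    if avg_cpu_percent > scale_out_above:
        # Busy: add an instance, but never exceed the budgetary ceiling.
        return min(current_instances + 1, max_instances)
    if avg_cpu_percent < scale_in_below:
        # Idle: remove an instance to cut spend, but keep a minimum for availability.
        return max(current_instances - 1, min_instances)
    return current_instances  # Within the comfortable band: no change.

print(desired_capacity(3, 85.0))  # 4 (scale out)
print(desired_capacity(3, 15.0))  # 2 (scale in)
```

The max_instances bound is where cost control meets automation: it caps exactly the kind of unexpected spending that the planning stage described above is meant to guard against.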
Pull Requests and Tech Debt
The biggest disadvantage of pull requests is understanding the context of the
change, whether technical or business: you see what has changed, without
necessarily learning why the change occurred. Almost universally, engineers
review pull requests in the browser and do their best to understand what’s
happening, relying on their understanding of tech stack, architecture,
business domains, etc. While some have the background necessary to mentally
grasp the overall impact of the change, for others, it’s guesswork,
assumptions, and leaps of faith… which only gets worse as the complexity and
size of the pull request increase. [Recently a friend said he reviewed all
pull requests in his IDE, greatly surprising me: first I’ve heard of such
diligence. While noble, that thoroughness becomes a substantial time
commitment unless that’s your primary responsibility. Only when absolutely
necessary do I do this. Not sure how he pulls it off!] Other than those good
Samaritans, mostly what you’re doing is static code analysis: within the
change in front of you, what has changed, and does it make sense? You can look
for similar changes, emerging patterns that might drive refactoring, best
practices, or others making similar changes.
Quote for the day:
"All leadership takes place through
the communication of ideas to the minds of others." --
Charles Cooley