Daily Tech Digest - April 23, 2023

Shadow IT, SaaS Pose Security Liability for Enterprises

All issues surrounding shadow IT can be traced back to an organization's lack of visibility. An unmanaged software stack gives IT teams zero insight into how sensitive company information is being used and distributed. Because these tools are not properly vetted and are left unmonitored, the data they store is not adequately protected by most organizations. This creates the perfect opening for hackers to seize important data, such as confidential financial records or personal details. Critical corporate data is at risk because most, if not all, SaaS tools require corporate credentials and access to an organization's internal network. A recent survey by Adaptive Shield and CSA shows that in the past year alone, 63% of CISOs have reported security incidents from this type of SaaS misuse. As noted earlier, the recurring theme many businesses experience with shadow IT is the risk of a data breach. It is equally important, however, to recognize the industry scrutiny businesses face and the penalties regulators can impose because of sprawling shadow IT.


The Cyber Resilience Act Threatens Open Source

At the heart of the issue is the need for organizations to self-certify their compliance with the act. Since open source is often maintained by a small, loose-knit group of contributors, it is difficult to see how this will work. Here’s the concern in a nutshell. Suppose you write a cool little C++ program for your own use. You aren’t a company, and you didn’t do it for profit. Wanting to share your work, you post your program on GitHub with an open source license. ... In fact, it is even encouraged. That’s how open source works. The trouble starts when the GRID database suffers a data breach, and the cause turns out to be a vulnerability in your code. Under the proposed law, it is possible you’d be left holding the bag for a large sum of money thanks to your generous hobby project that didn’t earn you a cent. The situation is even more complex if your code has multiple contributors. Was it your code that caused the breach or the other developer’s code? Who “owns” the project? Are all contributors liable?


Why Your Personal Brand Needs A Niche: The Benefits Of Specialization

Finding your niche also allows you to focus your energy and resources on a specific area, reducing the chances of feeling overwhelmed by trying to be everything to everyone. A niche provides a compass for your efforts, ensuring that the work you do aligns with your skills and interests. While being more specific can feel uncomfortable, it ultimately enables employers and clients to understand the specific value you offer. In the early days of my consultancy, I found myself saying yes to everything, including speaking engagements that fell outside my immediate area of expertise and clients whose demands stretched well beyond my offerings, requiring significant extra effort to cover the entire scope of the services they sought. Over time, I defined clearer boundaries around my scope of services. I also tried to communicate more explicitly which services I did not offer or consider within my area of expertise. When you niche down and clearly define your area of focus, you can make clearer career choices, pursuing only those opportunities that reinforce your positioning.


Former Microsoft CIO Jim DuBois Dishes On AI and Future of IT

One of the things we have to figure out in the future of work is that a huge part of the population isn’t able to take advantage of this hybrid and remote opportunity. And what do we do for them? Do we end up getting to a place where people are picking jobs based on whether they can work remote or not? And are we going to have to compensate people differently for being on- or off site? That’s something that hasn’t been solved … There are a lot of companies that haven’t figured out how to keep the collaboration and the culture going in a remote workforce. So they just said, “Oh, we’ve got to get people back into the office to do that.” I would say, “Or, you could figure out how to collaborate and keep your culture going with remote.” ... I’m a believer in carrot rather than stick incentives. Rather than compliance requirements, we need to focus on the fact that there’s so much value in ESG and in having a more diverse team. We need to focus more on the incentives and less on the “because we told you to” part.


Using generative AI to understand customers

In terms of better understanding customers, generative AI is really effective in summarising information. Companies are already using the technology to create auto-summaries of market research reports, eliminating the need to precis reports manually. Going forward, there is potential to expand this use case to summarise large volumes of information quickly and efficiently in order to provide concise answers to key business questions. ... Generative AI can also make it easier for all stakeholders to access market research without having to involve an insights manager each time, thereby removing access barriers and facilitating the seamless integration of consumer insights into daily operations. Moreover, generative AI can help to address common concerns associated with all stakeholders accessing market research, such as non-research workers asking the wrong questions. By prompting relevant questions related to their search query, the technology can help those without research backgrounds to ask better questions, ultimately leading to more accurate and useful customer information.


Optimizing SaaS With Automation and Zero-Touch IT

While it may seem daunting, the journey to achieving zero-touch IT is not out of reach. It does require investment in time, technology and people, however. And once you get there, the efficiencies will be apparent. Let’s break these benefits down by category. Zero-touch IT helps companies manage their software applications much more effectively. IT groups have historically gotten bogged down in the manual execution of tasks that are complicated and tedious, despite being basic and common. Two processes cited as top concerns for IT professionals, onboarding new employees and offboarding departing employees, are concrete examples. But managing the user life cycle of an employee doesn’t just start at onboarding and stop at offboarding. Many changes take place during an employee’s time at the organization—promotions, changes in departments, password resets, new project assignments, etc. And every single time an event like this occurs, some type of action is required, such as granting or revoking access to files, elevating access rights, or taking security steps to prevent unauthorized access.
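The lifecycle events described above map naturally onto event-driven automation. The following is a minimal sketch of that idea; the event names, roles, and in-memory "access store" are illustrative assumptions, not any particular product's API.

```python
# Minimal sketch of event-driven user-lifecycle automation, the core idea
# behind zero-touch IT: each lifecycle event triggers access changes with
# no human in the loop. All names here are hypothetical.

ROLE_ENTITLEMENTS = {
    "sales": {"crm", "email"},
    "engineering": {"repo", "ci", "email"},
}

access = {}  # user -> set of currently granted entitlements


def handle_event(user, event, **details):
    """Apply the access changes a lifecycle event implies; return the result."""
    if event == "onboard":
        access[user] = set(ROLE_ENTITLEMENTS[details["role"]])
    elif event == "change_department":
        # Re-derive access from the new role rather than accumulating grants.
        access[user] = set(ROLE_ENTITLEMENTS[details["new_role"]])
    elif event == "offboard":
        access.pop(user, None)  # revoke everything immediately
    return access.get(user, set())


handle_event("ana", "onboard", role="sales")
handle_event("ana", "change_department", new_role="engineering")
handle_event("ana", "offboard")
```

A real system would call into an identity provider and each SaaS tool's admin API instead of mutating a dictionary, but the shape of the automation is the same.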


Cyber insurer launches InsurSec solution to help SMBs improve security, risk management

InsurSec solutions are new, emerging offerings, but the concept behind them, and their potential to add value to the parties involved, is being recognized more widely, particularly for SMBs and organizations struggling with an adverse blend of low maturity and cost constraints. “I think the insurance market is recognizing that their future offering in this space has to grow beyond simple loss protection,” Paul Watts, distinguished analyst at the Information Security Forum, tells CSO. “Providing complementary services to help organizations with proactive and reactive management of cyber risk could also help foster stronger relationships between insurer and client.” Both parties stand to benefit here – by engaging in this way, risk is better (and jointly) managed, Watts says. Insurers are mitigating losses, and clients are drawing down on capabilities that were previously too expensive for consideration and could see lower premiums as a result.


Novel Technique Exploits Kubernetes RBAC to Create Backdoors

Researchers at cybersecurity firm Aqua Security said they recorded and analyzed an attack on the company's Kubernetes honeypots that used the RBAC system to gain persistence. Kubernetes role-based access control (RBAC) is a method of restricting network access based on the roles of individual users within an organization. In their honeypots, the researchers exposed AWS access keys in various locations on the cluster and received a beacon indicating that the attacker had used the keys to try to gain further access to the cloud service provider account and leverage the attack to steal more resources and data. "The findings are significant as they shed light on the risks of misconfigurations and how even large organizations can overlook the importance of securing their clusters, leaving them vulnerable to potential disasters with just one mistake," according to researchers. The large-scale campaign, dubbed RBAC Buster, allowed attackers to gain initial access by exploiting a misconfigured API server that allowed unauthenticated requests from anonymous users with privileges.
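One practical defense against the misconfiguration described here is auditing role bindings for grants to anonymous or unauthenticated identities. Below is a hedged sketch of such a check over ClusterRoleBinding objects already parsed to dicts (e.g., from `kubectl get clusterrolebindings -o json`); the sample data and function name are illustrative.

```python
# Flag ClusterRoleBindings whose subjects include Kubernetes' built-in
# anonymous/unauthenticated identities, the kind of misconfiguration
# that gave RBAC Buster its initial access.

RISKY_SUBJECTS = {"system:anonymous", "system:unauthenticated"}


def risky_bindings(bindings):
    """Return names of bindings that grant access to anonymous users."""
    flagged = []
    for binding in bindings:
        for subject in binding.get("subjects") or []:
            if subject.get("name") in RISKY_SUBJECTS:
                flagged.append(binding["metadata"]["name"])
                break
    return flagged


sample = [
    {"metadata": {"name": "cluster-admin-anon"},
     "subjects": [{"kind": "User", "name": "system:anonymous"}]},
    {"metadata": {"name": "view-devs"},
     "subjects": [{"kind": "Group", "name": "devs"}]},
]
flagged = risky_bindings(sample)
```

In the sample, only `cluster-admin-anon` would be flagged; a production audit would also inspect the bound role's rules, not just the subjects.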


How does blockchain fit into today’s enterprise?

According to Bennett, outside of the financial services sector, “we are still not at the point where we can confidently say that blockchain really is delivering the business value that people are looking for, simply because it is incredibly difficult to actually set up a blockchain network that at the end of the day really needs all those blockchain features.” Stack Overflow recently conducted a survey to find out which new technologies made it past what Gartner refers to as the hype cycle. Many new technologies can stir up excitement in the industry, but not all will see widespread adoption. Respondents ranked technologies on a scale of experimental to proven and of negative to positive impact. On a scale from zero (experimental) to 10 (proven), blockchain technology came in towards the middle at 4.8. And on a scale from zero (negative impact) to 10 (positive impact), it received a score of 5.3. Another survey, by Foundry, echoes these sentiments. It found that 51% of respondents were not interested in adopting blockchain technology within their organization.


Navigating The Future Of Cyber

Cyber is about more than protecting information—risk management, incident response planning and threat intelligence can often be directly correlated to increasing trust within businesses. Many organizations recognize the importance of prioritizing cybersecurity and have reported significant improvements in trust and efficiency through their efforts. In Deloitte Global’s latest Future of Cyber Survey, almost 70% of businesses identified as highly mature cyber organizations believe cybersecurity has positively impacted their organization's reputation and productivity. From robust cyber planning across the business to effective board-level engagement, high cyber performers recognize the importance of cyber responsibility and involvement across the whole organization. Beyond looking across the organization, cyber planning strategies should be regularly reviewed and updated to protect trust in the organization.



Quote for the day:

"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward

Daily Tech Digest - April 22, 2023

What CIOs need to become better enablers of sustainability

Key to this is a greater understanding of business operations and their production of CO2, or use of unsustainable practices and resources. As with most business challenges, data is instrumental. “Like anything, the hard work is the initial assessment,” says CGI director of business consulting and CIO advisor Sean Sadler. “From a technology perspective, you need to look at the infrastructure, where it’s applied, how much energy it draws, and then how it fits into the overall sustainability scheme.” CIOs who create data cultures across organizations enable not only sustainable business processes but also reduce reliance on consultancies, according to IDC. “Organizations with the most mature environmental, social, and governance (ESG) strategies are increasingly turning to software platforms to meet their data management and reporting needs,” says Amy Cravens, IDC research manager, ESG Reporting and Management Technologies. 


How to implement observability in your IT architecture

Although it has grown out of the APM market, observability is more than just APM with a new name and marketing approach. The most crucial factor differentiating observability from APM is that observability includes three distinct monitoring approaches—tracing, metrics, and logs—while APM provides tracing alone. By collecting and aggregating these various types of data from multiple sources, observability offers a much broader view of the overall system and application health and performance, with the ability to gain much deeper insights into potential performance issues. Another important distinction is that open source tools are the foundation of observability, but not APM. While some APM vendors have recently open-sourced the client side of their stack, the server side of all the popular commercial APM solutions is still proprietary. These distinctions do not mean that observability and APM are unconnected. Application performance management can still be an important component of an observability implementation.
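The three signals can be illustrated with a toy emitter that records a trace span, increments a metric, and writes a structured log for a single operation. This is a hand-rolled sketch for illustration only; a real deployment would use an SDK such as OpenTelemetry rather than these hypothetical structures.

```python
import json
import logging
import time
import uuid
from collections import Counter

# Toy emitters for the three observability signals: traces, metrics, logs.
metrics = Counter()
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")


def traced(operation):
    """Record one span (trace), one counter increment (metric), one log line."""
    trace_id = uuid.uuid4().hex
    start = time.monotonic()
    # ... the actual work being observed would happen here ...
    span = {
        "trace_id": trace_id,
        "name": operation,
        "duration_ms": round((time.monotonic() - start) * 1000, 3),
    }
    metrics[f"{operation}.count"] += 1                    # metric
    log.info("span completed: %s", json.dumps(span))      # structured log
    return span                                           # trace span


span = traced("checkout.payment")
```

Correlating all three around a shared `trace_id`, as above, is what lets an observability backend pivot from a slow trace to the matching logs and metrics.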


How Conversational Programming Will Democratize Computing

The scope of a conversation must mirror a human “mental stack”, not that of a computer. When I use a conventional Windows interface on my laptop, I am confronted with the computer’s file system, which is presented as folders and files. That effort is reversed in conversational programming — the LLM system has to work with my limited human cognitive faculties. This means creating things in response to requests, and reporting outcomes at the same level that I asked for them. Returning arcane error codes in response to requests will immediately break the conversation. We have already seen ChatGPT reflect on its errors, which means a conversation should retain its value for the user. ... The industrialization of LLMs is the only thing we can be reasonably sure about, because the investment has already been made. However, the rapid advancement of GPT systems will likely run aground in the same areas that other large-scale projects have in the past. The lack of collaboration between large competitors has eroded countless good ideas that depended on interoperability.


Dark Side of DevOps - the Price of Shifting Left and Ways to Make it Affordable

On the one hand, not having a gatekeeper feels great. Developers don’t have to wait for somebody’s approval - they can iterate faster and write better code because their feedback loop is shorter, and it is easier to catch and fix bugs. On the other hand, the added cognitive load is measurable - all the tools and techniques that developers have to learn now require time and mental effort. Some developers don’t want that - they just want to concentrate on writing their own code, on solving business problems. ... However, as companies grow, so does the complexity of their IT infrastructure. Maintaining dozens of interconnected services is not a trivial task anymore. Even locating their respective owners is not so easy. At this point, companies face a choice: either reintroduce the gatekeeping practices that negatively affect productivity, or provide a paved path - a set of predefined solutions that codifies best practices and takes away mental toil, allowing developers to concentrate on solving business problems.


Why generative AI will turbocharge low-code and no-code development

Generative AI's integration into low-code and no-code platforms will lower the barriers to adoption of these development environments in enterprises, agreed John Bratincevic, principal analyst at Forrester. “The integration of generative AI will see adoption of low-code by business users, since the learning curve for getting started on developing applications will be even lower,” Bratincevic said. The marriage of generative AI with low-code and no-code platforms will aid professional developers as well, analysts said. ... “These generative AI coding capabilities will be most helpful for developers working on larger projects that are looking for shortcuts to support commoditized or common sense requests,” said Hyoun Park, principal analyst at Amalgam Insights. “Rather than searching for the right library or getting stuck on trying to remember a specific command or term, GPT and other similar generative AI tools will be able to provide a sample of code that developers can then use, edit, and augment,” Park said.


Start with Sound Policies, Then Customize with Required Exceptions

Number one is our culture of security, not just within the cybersecurity organization, but broader than the cybersecurity organization looking at the entire Providence org – instilling security practices into our business practices, or business processes, instilling security mindset into our caregivers, because our caregivers truly are on the front lines of the cybersecurity battlefield. They’re the ones that are receiving phishing emails, they’re the ones that are making decisions on what they click on, what they don’t click on, interactions with our clinical device vendors, or clinical application vendors. They’re making risk choices every day. So informing them about security, training them on security, and instilling security culture – broader than just the security organization – has been a real focus of ours this year. Another focus of ours has been on implementing or continuing the journey, I should say, toward a zero trust approach here at Providence. And when I say zero trust, a lot of people use the term, “never trust, always verify.” 


Leap of Faith: Building Trust and Commitment with Engineering

Leaping before you’re ready will result in disappointment, if not outright disaster. It is important to understand what knowledge and muscle are required along the various stages that lead to full Engineering Trust & Autonomy. Each organization must determine these trust criteria for itself; however, it is imperative to recognize that starting from the future end-state goal and working backwards yields the greatest benefit (e.g., innovative, inspirational differentiation). To be most effective, seek out leading teams already doing this in your organization. They do exist, but they are most likely considered one-offs, rogue, and exceptions to the internal norm. Good. That’s what you’re looking for! ... Once trust criteria are shared and definitive trust boundaries are in place, the hardest piece of this puzzle must be executed: executive leadership and individual commitment. Putting your strategy into play takes time, and during that time doubts will creep in. This is normal; however, there are a few tricks to leverage that ensure you stay the course.


Used Routers Often Come Loaded With Corporate Secrets

The big danger is that the wealth of information on the devices would be valuable to cybercriminals and even state-backed hackers. Corporate application logins, network credentials, and encryption keys have high value on dark web markets and criminal forums. Attackers can also sell information about individuals for use in identity theft and other scamming. Details about how a corporate network operates and the digital structure of an organization are also extremely valuable, whether you're doing reconnaissance to launch a ransomware attack or plotting an espionage campaign. For example, routers may reveal that a particular organization is running outdated versions of applications or operating systems that contain exploitable vulnerabilities, essentially giving hackers a road map of possible attack strategies. ... Since secondhand equipment is discounted, it would potentially be feasible for cybercriminals to invest in purchasing used devices to mine them for information and network access and then use the information themselves or resell it.


ChatGPT may hinder the cybersecurity industry

ChatGPT’s AI technology is readily available to most of the world. Therefore, as with any other battle, it’s simply a race to see which side will make better use of the technology. Cybersecurity companies will need to continuously combat nefarious users who will figure out ways to use ChatGPT to cause harm in ways that cybersecurity businesses haven’t yet fathomed. And yet this fact hasn’t deterred investors, and the future of ChatGPT looks very bright. With Microsoft investing $10 billion in OpenAI, it’s clear that ChatGPT’s knowledge and abilities will continue to expand. For future versions of this technology, software developers need to pay attention to its lack of safety measures, and the devil will be in the details. ChatGPT probably won’t be able to prevent this kind of misuse to a large degree. It can have mechanisms in place to evaluate users’ habits and home in on individuals who use obvious prompts like, “write me a phishing email as if I’m someone’s boss,” or try to validate individuals’ identities. OpenAI could even work with researchers to train its datasets to evaluate when their text has been used in attacks elsewhere.
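The kind of screening for "obvious prompts" mentioned above can be sketched as a pattern match, though it is worth stressing how naive this is: the patterns below are illustrative assumptions, and real abuse detection would combine classifiers, usage history, and identity signals, and would still be easy to evade with rephrasing.

```python
import re

# Deliberately naive screen for obviously malicious prompts.
# The pattern list is a made-up example, not a real product's rule set.
SUSPICIOUS_PATTERNS = [
    r"phishing\s+email",
    r"write\s+malware",
    r"steal\s+credentials",
]


def flag_prompt(prompt):
    """Return the patterns the prompt matches (empty list = not flagged)."""
    return [p for p in SUSPICIOUS_PATTERNS
            if re.search(p, prompt, re.IGNORECASE)]


hits = flag_prompt("Write me a phishing email as if I'm someone's boss")
```

Here `hits` contains one match; a benign prompt like "Summarize this quarterly report" would return an empty list. The gap between this toy and a robust safeguard is exactly the problem the excerpt describes.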


A New Era of Natural Language Search Emerges for the Enterprise

Due to the statistical nature of their underlying technology, chatbots can hallucinate incorrect information, as they do not actually understand the language but are simply predicting the next best word. Often, the training data is so broad that explaining how a chatbot arrived at the answer it gave is nearly impossible. This “black box” approach to AI with its lack of explainability simply will not fly for many enterprise use cases. Welsh gives the example of a pharmaceutical company that is delivering answers to a healthcare provider or a patient who visits its drug website. The company is required to know and explain each search result that could be given to those asking questions. So, despite the recent spike in demand for systems like ChatGPT, adapting them for these stringent enterprise requirements is not an easy task, and this demand is often unmet, according to Welsh. ... Welsh predicts the companies that will win during this new era of the enterprise search space are those that had the foresight to have a product on the market now, and though the competition is currently heating up, some of these newer companies are already behind the curve. 



Quote for the day:

"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis

Daily Tech Digest - April 21, 2023

A team of ex-Apple employees wants to replace smartphones with this AI projector

It's a seamless blend of technology and human interaction that Humane believes can extend to daily schedule run-downs, seeing map directions, and receiving visual aids for cooking or when fixing a car engine -- as suggested by the company's public patents. The list goes on. Chaudhri also demoed the wearable's voice translator, which converted his English into French while using an AI-generated voice to retain his tone and timbre, as reported by designer Michael Mofina, who watched the recorded TED Talk before it was taken down. Mofina also shared an instance when the wearable was able to recap the user's missed notifications without sounding invasive, framing them as, "You got an email, and Bethany sent you some photos." Perhaps the biggest draw to Humane and its AI projector is the team behind it. That roster includes Chaudhri, a former Director of Design at Apple who worked on the Mac, iPod, iPhone, and other prominent devices, and Bethany Bongiorno, also a former Apple employee who was heavily involved in the software management of iOS and macOS.


Three issues with generative AI still need to be solved

Generative AI uses massive language models, it’s processor-intensive, and it’s rapidly becoming as ubiquitous as browsers. This is a problem because existing, centralized datacenters aren’t structured to handle this kind of load. They are I/O-constrained, processor-constrained, database-constrained, cost-constrained, and size-constrained, making a massive increase in centralized capacity unlikely in the near term, even though the need for this capacity is going vertical. These capacity problems will increase latency, reduce reliability, and over time could throttle performance and reduce customer satisfaction with the result. The need is for a more hybrid approach where the AI components necessary for speed are retained locally (on devices) while the majority of the data resides centrally to reduce datacenter loads and decrease latency. Without a hybrid solution — where smartphones and laptops can do much of the work — use of the technology is likely to stall as satisfaction falls, particularly in areas such as gaming, translation, and conversations where latency will be most annoying.


Exploring The Incredible Capabilities Of Auto-GPT

The first notable application is code improvement. Auto-GPT can read, write and execute code and thus can improve its own programming. The AI can evaluate, test and update code to make it faster, more reliable, and more efficient. In a recent tweet, Auto-GPT’s developer, Significant Gravitas, shared a video of the tool checking a simple example function responsible for math calculations. While this particular example only contained a simple syntax error, it still took the AI roughly a minute to correct the mistake, which would have taken a human much longer in a codebase containing hundreds or thousands of lines. ... The second notable application is in building an app. Auto-GPT detected that Varun Mayya needed the Node.js runtime environment to build an app, which was missing on his computer. Auto-GPT searched for installation instructions, downloaded and extracted the archive, and then started a Node server to continue with the job. While Auto-GPT made the installation process effortless, Mayya cautions against using AI for coding unless you already understand programming, as it can still make errors.


The Best (and Worst) Reasons to Adopt OpenTelemetry

Gathering telemetry data can be a challenge, and with OpenTelemetry now handling essential signals like metrics, traces and logs, you might feel the urge to save your company some cash by building your own system. As a developer myself, I totally get that feeling, but I also know how easy it is to underestimate the effort involved by just focusing on the fun parts when kicking off the project. No joke, I’ve actually seen organizations assign teams of 50 engineers to work on their observability stack, even though the company’s core business is something else entirely. Keep in mind that data collection is just a small part of what observability tools do these days. The real challenge lies in data ingestion, retention, storage and, ultimately, delivering valuable insights from your data at scale. ... At the very least, auto-instrumentation will search for recognized libraries and APIs and then add some code to indicate the start and end of well-known function calls. Additionally, auto-instrumentation takes care of capturing the current context from incoming requests and forwarding it to downstream requests.
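The wrapping trick behind auto-instrumentation, marking the start and end of well-known function calls and propagating context downstream, can be sketched with a plain decorator. This is a hypothetical illustration of the mechanism, not OpenTelemetry's actual API.

```python
import functools
import time

# Sketch of auto-instrumentation: known functions are replaced by wrappers
# that record start/end timing and attach the current request context.
recorded_spans = []
current_context = {"trace_id": None}  # in real SDKs, context is propagated per request


def instrument(fn):
    """Wrap fn so every call is recorded as a span carrying the active context."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            recorded_spans.append({
                "name": fn.__name__,
                "trace_id": current_context["trace_id"],  # propagated context
                "duration_ms": (time.monotonic() - start) * 1000,
            })
    return wrapper


@instrument
def fetch_user(user_id):
    return {"id": user_id}


current_context["trace_id"] = "abc123"  # would come from incoming request headers
fetch_user(42)
```

Auto-instrumentation agents apply this same wrapping to recognized libraries at load time, which is why applications get traces without any code changes.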


OpenAI’s hunger for data is coming back to bite it

The Italian authority says OpenAI is not being transparent about how it collects users’ data during the post-training phase, such as in chat logs of their interactions with ChatGPT. “What’s really concerning is how it uses data that you give it in the chat,” says Leautier. People tend to share intimate, private information with the chatbot, telling it about things like their mental state, their health, or their personal opinions. Leautier says it is problematic if there’s a risk that ChatGPT regurgitates this sensitive data to others. And under European law, users need to be able to get their chat log data deleted, he adds. OpenAI is going to find it near-impossible to identify individuals’ data and remove it from its models, says Margaret Mitchell, an AI researcher and chief ethics scientist at startup Hugging Face, who was formerly Google’s AI ethics co-lead. The company could have saved itself a giant headache by building in robust data record-keeping from the start, she says. Instead, it is common in the AI industry to build data sets for AI models by scraping the web indiscriminately and then outsourcing the work of removing duplicates or irrelevant data points, filtering unwanted things, and fixing typos.


Executive Q&A: The State of Cloud Analytics

Many businesses are trying hard right now to stay profitable during these times of economic uncertainty. The startling takeaway to us was that business and technical leaders see cloud analytics as the tool -- not a silver bullet, but a critical component -- for staying ahead of the pack in the current economic climate. Not only that, organizations need to do more with less and, as it turns out, cloud analytics is not only a wise investment during good economic times, but also in more challenging economic times. Businesses reap benefits from the same solution (cloud analytics) in either scenario. For example, cloud analytics is typically more cost-effective than on-premises analytics solutions because it eliminates the need for businesses to invest in expensive hardware and IT infrastructure. It also offers the flexibility businesses need to quickly experiment with new data sources, analytics tools, and data models to get better insights -- without having to worry about the underlying infrastructure.


AI vs. machine learning vs. data science: How to choose

It's a common topic for organizational leaders—they want to be able to articulate the core differences between AI, machine learning (ML), and data science (DS). However, sometimes they do not understand the nuances of each and thus struggle to strategize their approach to things such as salaries, departments, and where they should allocate their resources. Software-as-a-Service (SaaS) and e-commerce companies specifically are being advised to focus on an AI strategy without being told why or what that means exactly. Understanding the complexity of the tasks you aim to accomplish will determine where your company needs to invest. It is helpful to quickly outline the core differences between each of these areas and give better context to how they are best utilized. ... To decide whether your company needs to rely on AI, ML, or data science, focus on one principle to begin: Identify the most important tasks you need to solve and let that be your guide.


The strong link between cyber threat intelligence and digital risk protection

ESG defined cyber threat intelligence as, “evidence-based actionable knowledge about the hostile intentions of cyber adversaries that satisfies one or several requirements.” In the past, this definition really applied to data on IoCs, reputation lists (e.g., lists of known bad IP addresses, web domains, or files), and details on TTPs. The intelligence part of DRP is intended to provide continuous monitoring of things like user credentials, sensitive data, SSL certificates, or mobile applications, looking for general weaknesses, hacker chatter, or malicious activities in these areas. For example, a fraudulent website could indicate a phishing campaign using the organization’s branding to scam users. The same applies for a malicious mobile app. Leaked credentials could be for sale on the dark web. Bad guys could be exchanging ideas for a targeted attack. You get the picture. It appears from the research that the proliferation of digital transformation initiatives is acting as a catalyst for threat intelligence programs. When asked why their organizations started a CTI program, 38% said “as a part of a broader digital risk protection effort in areas like brand reputation, executive protection, deep/dark web monitoring, etc.”


4 perils of being an IT pioneer

An enterprise-wide IT project is deemed successful only when a team member at the lowest level of the hierarchy adopts it. Ensuring adoption of any new solution is always a challenge, even more so for a solution based on a new technology. There’s pushback from end users because they find the idea of losing power or skills in the face of new technology disconcerting. For any IT leader, crossing this mental inertia is always among the toughest challenges. Moreover, IT leaders have seen many initiatives based on new technologies fail because there was no buy-in from the company’s top leadership. Even if users adopt the new technology, the initial learning curve is often steep, impacting productivity. Most organizations can’t afford or aren’t ready to accept the temporary revenue loss due to the disruption caused by the new technology. Therefore, business and IT leaders must have a clear understanding of the risk/reward principle when rolling out new tech. Buy-in from top management as a top-down mandate can make adoption of new technology easier.


Is Generative AI an Enterprise IT Security Black Hole?

Shutting the door on generative AI might not be a possibility for organizations, even for the sake of security. “This is the new gold rush in AI,” says Richard Searle, vice president of confidential computing at Fortanix. He cited news of venture capital looking into this space along with tech incumbents working on their own AI models. Such endeavors may make use of readily available resources to get into the AI race fast. “One of the important things about the way that systems like GPT-3 were trained is that they also use common crawl web technology,” Searle says. “There’s going to be an arms race around how data is collected and used for training.” That may also mean increased demand for security resources as the technology floods the landscape. “It seems like, as in all novel technologies, what’s happening is the technology is racing ahead of the regulatory oversight,” he says, “both in organizations and the governmental level.”



Quote for the day:

"Our chief want is someone who will inspire us to be what we know we could be." -- Ralph Waldo Emerson

Daily Tech Digest - April 20, 2023

How to succeed as a fractional CIO

A fractional CIO is typically an experienced IT leader who is external to the enterprise yet acts as an accountable leader and extension of the executive team, says Dave Hartman, president of IT management consulting firm Hartman Executive Advisors. “A fractional CIO thinks beyond technical needs and considers the needs of the organization from a strategic business perspective.” ... Beyond expertise and management skills, a fractional CIO can provide an independent point of view to enterprise leadership in critical areas, such as emerging technologies and IT security, as well as updating or building a technology roadmap. “This can help key decision-making and can sometimes segue into providing more hands-on help in executing on the chosen path,” says Amelia Tyagi, co-founder and CEO of Business Talent Group. In some cases, an enterprise may turn to a fractional CIO to serve as an interim executive, assuming the leadership role for a fixed period of time. “Fractional and interim CIOs are particularly effective solutions for companies that are undergoing rapid change, or those that have an unexpected leadership gap,” Tyagi explains.


AI Heightens Cyber Risk for Legacy Weapon Systems

Artificial intelligence's nascent centrality to offensive weapons development means the United States should take bold steps to ensure that adversaries are unable to develop their models, said Rand Corp. CEO Jason Matheny. "These AI models right now are very brittle," Matheny said. "We need to be thinking about ways that we can slow down progress elsewhere by doing things like adversarial attacks, data poisoning and model inversion. Let's use the tricks that we're seeing used against us and make sure that we understand the state of the art." Data poisoning - in which adversaries alter the data used to train AI models in order to distort the resulting algorithms - is already a risk for the United States, said Shift5 co-founder and CEO Josh Lospinoso. "These are real problems," he said. "We need to think clearly about shoring up those security vulnerabilities in our AI algorithms before we deploy these broadly and have to clean the mess up afterwards."


TUC says government is failing to protect workers from AI harms

While AI-powered workplace surveillance offers greater control to organisations over worker behaviour, Pakes said the increasing datafication of employees is also a “profit centre” for employers, which can sell the data on to third parties. “Not all of us, but many of us, can take our work just about anywhere now with technology, but it also means our work and our bosses can follow us just about everywhere, into our private lives, into our homes,” he said, adding that AI-powered surveillance is no longer restricted to the “canary in the coal mine” of logistics and warehouse workers. “It doesn’t matter if you’re blue collar or white collar, doesn’t matter if you’re in a factory, in the office or at home – this software can check us and track us and invade us, and we really need to talk about it.” Gina Neff, executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, said that as part of her research she has interviewed numerous economists who only offered a “collective shrug” when asked what they think the overall, long-term impact of AI will be on work.


ENISA’s Threat Landscape and the Effect of Ransomware

According to ENISA, cybersecurity threats continued to grow during the COVID-19 pandemic. The pandemic increased cybersecurity threats and attack surfaces. It also provided attackers opportunities to exploit the new normal, partly because of the growth in people’s online presence (e.g., social media), hybrid working models and the transition to more cloud-based solutions. The boom in the transportation industry’s courier, express and parcel (CEP) business was also a factor because, during the pandemic, CEP delivery services became a critical infrastructure. The acceleration in new artificial intelligence (AI) technology and advanced features (e.g., AI adaptability through machine learning [ML] and automated phishing email distributions) also spurred the growth of cybersecurity threats. These cyberattacks became more mainstream, leading to more targeting of enterprises through home offices. In addition, state-backed or state-sponsored groups have taken advantage of the pandemic to conduct cyberespionage and implement COVID-19–related social engineering lures.


This new technology could blow away GPT-4 and everything like it

Known as Hyena, the technology is able to achieve equivalent accuracy on benchmark tests, such as question answering, while using a fraction of the computing power. In some instances, the Hyena code is able to handle amounts of text that make GPT-style technology simply run out of memory and fail. "Our promising results at the sub-billion parameter scale suggest that attention may not be all we need," write the authors. That remark refers to the title of a landmark AI report of 2017, 'Attention is all you need'. In that paper, Google scientist Ashish Vaswani and colleagues introduced the world to Google's Transformer AI program. The transformer became the basis for every one of the recent large language models. But the Transformer has a big flaw. It uses something called "attention," where the computer program takes the information in one group of symbols, such as words, and moves that information to a new group of symbols, such as the answer you see from ChatGPT, which is the output.
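The attention mechanism described above can be sketched in a few lines of NumPy. This is an illustrative toy (single head, random toy embeddings), not the Transformer's full implementation, but it shows the key scaling problem: the score matrix is n-by-n in the number of input symbols, which is why GPT-style models run out of memory on very long inputs while Hyena's authors seek a cheaper alternative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted mix of V's rows, with weights given by
    how similar that query is to every key (softmax of scaled dot products)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # n x n: the quadratic cost
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ V                                    # move information between symbol groups

# Toy example: 3 "words" with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Doubling the input length quadruples the size of `scores`, which is the bottleneck the Hyena work targets.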


Skills-first hiring can increase talent pools by up to 20x

Generally, employers are looking for candidates with transferable in-demand skills such as leadership and specific technological abilities, Duke said. “In nearly all cases, employers will find that candidates always have skills that can be applied to a position, even if the candidate is coming from a vastly different industry. This increases the talent pool and makes it easier for employers to find good candidates,” she said. As an example of how the skills-first approach works, Duke said that when employers looking to hire digital marketing managers use this method, the available talent pool increases by almost 22x. “That’s because many of the skills associated with this job are common across other jobs and industries,” she explained. “In this case, about 30 separate job titles across the U.S. have relevant skills for this job, but most companies would overlook those candidates.” Every role at an organization can be broken down into a set of skills needed to do the job well. Every person has a set of skills, whether they’re an existing employee or part of an external talent pool, Duke said. 


Data privacy implications of ChatGPT

Although automated decision-making can be useful for organizations, there are serious concerns and risks to individuals subject to such processes, such as adverse legal effects based on processes they may not understand or that may be exacerbating and replicating biases and discriminatory practices. For example, the American Civil Liberties Union has opined that “AI is built by humans and deployed in systems and institutions that have been marked by entrenched discrimination . . . bias is in the data used to train the AI . . . and can rear its head throughout the AI’s design, development, implementation, and use.” Similar concerns were raised in a 2022 Constangy webinar on AI featuring Commissioner Keith Sonderling of the Equal Employment Opportunity Commission. Further, the Italian data protection authority is investigating additional data privacy implications of ChatGPT, such as whether it can comply with the GDPR, its legal basis for processing, collecting, and storing mass amounts of personal data, and its lack of age verification tools. In the meantime, Italy has temporarily banned ChatGPT.


Why is ETL Dying?

Traditional ETL pipelines have faced difficulties in supporting the agility required by modern analytics use cases, leaving business users waiting in line for their desired results. As a result, ETL pipelines are often viewed as a hindrance to better performance and businesses must carefully assess their current role and explore how they can be optimally leveraged in the contemporary analytics landscape. Traditional ETL processes require moving large amounts of data across various stages and systems, making them slow, demanding on resources, and prone to errors. This can be challenging for modern data-driven businesses to manage, as traditional ETL tools often come with a high price tag and demand substantial investments in hardware, software, and personnel resources. In contrast, newer data platforms present pre-built services and extensions that can lessen these expenses and enable enterprises to concentrate on providing meaningful outcomes to their users. For example, Google Data Stream is an instance of this approach, which is capable of managing real-time CDC with minimal coding or setup.
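As a minimal illustration of the staged pattern the paragraph criticizes, here is a toy extract-transform-load hop. The rows, schema, and in-memory SQLite target are hypothetical stand-ins for a real source system and warehouse; in a production pipeline each of these stages is a separate system hop, which is exactly where the latency, cost, and error-proneness accumulate.

```python
import sqlite3

# "Extract": hypothetical rows pulled from a source system
raw_orders = [
    {"id": 1, "amount": "19.99", "country": "us"},
    {"id": 2, "amount": "5.00",  "country": "de"},
]

def transform(row):
    # "Transform": cleansing and typing happen in flight, in a staging hop
    return (row["id"], float(row["amount"]), row["country"].upper())

# "Load": an in-memory database standing in for the warehouse
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, country TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 (transform(r) for r in raw_orders))
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(round(total, 2))  # 24.99
```

Streaming/CDC approaches replace this batch of discrete hops with continuous propagation of changes, which is the shift the article describes.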


These medical IoT devices carry the biggest security risks

"Advances in technology are essential to improve the speed and quality of care delivery as the industry is challenged with a shortage of care providers, but with increasingly connected care comes a bigger attack surface," said Mohammad Waqas, Armis' principal solutions architect for healthcare. "Protecting every type of connected device, medical, IoT, even the building management systems, with full visibility and continuous contextualised monitoring is a key element to ensuring patient safety." The prevalence of unprotected devices comes as the healthcare sector continues to face fresh cybersecurity risks. The sector saw a 31% climb in threat activities between January and March this year compared to the previous quarter, according to Armis, citing figures from its intelligence platform. Other evidence suggests the healthcare sector is increasingly reliant on connected devices. ... Singapore's Cyber Security Agency (CSA) has also warned that critical IoT devices are potential targets in ransomware attacks, with cyber criminals recognising that the infection of these devices could lead to significant downtime costs and damage.


IBM takes a pragmatic approach to enterprise AI

IBM has integrated AI with its mainframes. The newest z16 Big Iron boasts an AI accelerator built onto its core Telum processor that can do 300 billion deep-learning inferences per day with one millisecond latency, according to IBM. The latest version of its z/OS operating system will include a new AI Framework for system operations to optimize IT processes, simplify management, improve performance, and reduce skill requirements. The new version will also support technologies to deploy AI workloads co-located with z/OS applications and will feature improved cloud capabilities. IBM said AI-powered workload management will intelligently predict upcoming workloads and react by allocating an appropriate number of batch runs, thus eliminating manual fine-tuning and trial-and-error approaches. “Systems are getting more and more complex, so we want to simplify operations with AI and automation by bringing a very prescriptive solution to our clients that will give them value out of the box and then much more,” Chopra said.



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith

Daily Tech Digest - April 19, 2023

Why Your Current Job May Be Holding Back Your IT Career

Failing to pursue professional development opportunities and not maintaining a current and relevant skillset are both great ways to shift a career into neutral. “This includes not keeping up with the latest industry trends and technologies, not networking with other professionals, and not pursuing additional training or education opportunities,” Delfine says. “IT professionals need to continually develop their skillsets and be aware of and learn new methods and tools that can be applied across multiple industries.” Another mistake is spending too little or too much time in a particular role. Knowing when to stay and when to move on is a skill within itself, says Erin Goheen, vice president of technology at freight and logistics services firm XPO. “I've seen cases where job-hopping can be detrimental to one's career because it prohibits technologists from maximizing the amount of learning and skill development gained in a particular role,” she explains. “Conversely, if you’re in a role for too long and you're no longer learning and expanding your professional capabilities, other professionals who are actively growing in similar roles will pass you in their career trajectories.”


Top risks and best practices for securely offboarding employees

Shadow IT and information systems that aren’t part of a business’s identity and access management (IAM) architecture are a huge risk to successful, secure offboarding, says Richard Jones, global CISO at Orange Cyberdefense. This is magnified for cloud and SaaS systems/applications that don’t require specific network access or physical presence in an office, with IT teams often unaware of the extent of employees’ SaaS usage. ... Another challenge is managing software asset licenses. If employees aren’t properly offboarded from cloud system licenses this can lead to excessive IT costs as well as security risks, as licenses are often charged per user, per month, Jones says. It’s not just the risks of outgoing employees themselves that CISOs need to consider. “In most cases, mass layoffs cause remaining employees to be concerned about their job security, which can increase insider threats and introduce security gaps caused by unintentional negligence,” says Mohan Koo, CTO at DTEX Systems.


How Cybersecurity Leaders Can Capitalize on Cloud and Data Governance Synergy

In today’s modern organizations, explosive amounts of digital information are being used to drive business decisions and activities. However, both organizations and individuals may not have the necessary tools and resources to effectively carry out data governance at a large scale. I’ve experienced this scenario in both large private and public sector organizations: trying to wrangle data in complex environments with multiple stakeholders, systems, and settings. It often leads to incomplete inventories of systems and their data, along with who has access to it and why. Cloud-native services, automation, and innovation enable organizations to address these challenges as part of their broader data governance strategies and under the auspices of cloud governance and security. Many IaaS hyperscale cloud service providers offer native services to enable activities such as data loss protection (DLP). For example, AWS Macie automates the discovery of sensitive data, provides cost-efficient visibility, and helps mitigate the threats of unauthorized data access and exfiltration.


Seven Tips for Achieving Dynamic Professional Transformation with Framework Modeling

Framework modeling can be a significant differentiator and can empower professionals with rich knowledge repositories of best practices derived from frameworks. The modeling of the framework offers a big-picture approach and life cycle perspective for achieving goals. This can aid professionals as existing and emerging technologies impact which professional skills are relevant and required in the market. Innovative technologies continue to emerge and create an impact on employment due to new services made possible through innovation and automation. For example, there is much speculation about how ChatGPT will impact employment opportunities in various lines of work. There is also widespread concern that management will prefer to harness technology rather than employees when considering value delivery in the future. Hence, professionals as knowledge workers can benefit by upgrading their skills by adapting the framework modeling approach. ...  Framework modeling can be considered the skill of carving the required knowledge from the structure and contents of a framework per an enterprise’s needs.


FBI and FCC warn about “Juicejacking” – but just how useful is their advice?

The idea is simple: people on the road, especially at airports, where their own phone charger is either squashed away deep in their carry-on luggage and too troublesome to extract, or packed into the cargo hold of a plane where it can’t be accessed, often get struck by charge anxiety. Phone charge anxiety, which first became a thing in the 1990s and 2000s, is the equivalent of electric vehicle range anxiety today, where you can’t resist plugging in for a bit more juice right now, even if you’ve only got a few minutes to spare, in case you hit a snag later on in your journey. But phones charge over USB cables, which are specifically designed so they can carry both power and data. So, if you plug your phone into a USB outlet that’s provided by someone else, how can you be sure that it’s only providing charging power, and not secretly trying to negotiate a data connection with your device at the same time? What if there’s a computer at the other end that’s not only supplying 5 volts DC, but also sneakily trying to interact with your phone behind your back?


7 keys to controlling serverless cloud costs

Overprovisioning memory and CPU allocation are two culprits often found behind serverless computing cost overruns. When you execute a serverless function in your cloud application, your CSP allocates resources according to the function’s configuration. Then when billing time comes around, your CSP bases your billing on the amount of resources your application consumes. It makes good business sense to spend the extra time during the design phase to determine the appropriate amount of resources that each serverless function requires, so you’re minimizing costs. Train your cloud developers to use compute only when necessary, advises CloudZero. They give the example of using step functions to call APIs instead of Lambda functions, meaning you only pay for the step functions. The major CSPs and cloud management platforms include key performance indicator (KPI) monitoring dashboards of one form or another. You can also use observability tools, such as Datadog, for KPI monitoring. Monitoring your serverless KPIs should figure prominently in your project and deployment plans.
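The effect of right-sizing memory can be estimated with the standard GB-second billing model that serverless providers use (compute cost scales with invocations, duration, and allocated memory). The per-GB-second rate and the workload numbers below are illustrative assumptions, not quoted prices; check your provider's current pricing.

```python
# Assumed illustrative rate, USD per GB-second -- not a quoted price
PRICE_PER_GB_SECOND = 0.0000166667

def monthly_compute_cost(invocations, avg_duration_ms, memory_mb):
    """Back-of-envelope serverless compute cost: GB-seconds times rate."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND

# Same hypothetical function: overprovisioned at 1024 MB vs right-sized at 256 MB
over  = monthly_compute_cost(10_000_000, 120, 1024)
right = monthly_compute_cost(10_000_000, 120, 256)
print(f"${over:.2f} vs ${right:.2f}")  # $20.00 vs $5.00
```

Because cost is linear in allocated memory, a 4x overallocation is a 4x compute bill, which is why the design-phase sizing exercise the paragraph recommends pays for itself.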


New DDoS attacks on Israel’s enterprises, infrastructure should be a wake-up call

“Generally speaking, all these attacks happen with more or less sophisticated forms, either abusing different vulnerabilities and systems or brute force DDoS,” Izrael said. “What’s different about these is that an unsophisticated DDoS tactic would be to blast a website with traffic and take it down. What’s happening here is that attackers have been targeting a lot of weak spots where they are taking down services.” Izrael added that the attackers have also managed to hobble, albeit briefly, smart IoT functionality at individual homes, buildings and other structures. Justin Cappos, professor of computer science and engineering at the NYU Tandon School of Engineering, said network provisioning operators need to pay attention to any new group launching large-scale DDoS attacks. ... Izrael said the combination of direct attacks by the Iranian government and indirect attacks by affiliated groups achieves two goals: keeping the provenance of the attacks very murky and making the attack seem bigger because the origin of the attacks is unclear. 


Rising to the challenge: the role of boards in effective bank governance

Effective governance has been a priority of our supervision for several years, and will continue to be in the years to come. As part of our work on this priority, we are carrying out an update of our supervisory expectations on governance. Today’s seminar is an important opportunity to listen to the industry as we fine-tune those expectations, and marks one of many milestones along the way. Particularly in the current climate, it is essential for banks to have strong and effective governance. A bank needs a board that can steer it through calm and stormy waters alike, setting the compass on the strategy for the bank, while ensuring a sustainable business model and monitoring risks in a forward-looking manner. In today’s environment, backward-looking indicators of risk might be misleading. It is therefore more important than ever for boards to be vigilant. Boards need to take a proactive approach to identifying emerging risks and trends, assessing potential impacts on the bank, and taking appropriate actions to mitigate them.


Unlocking the power of a multigenerational workforce

Those organisations that don’t innovate die a slow death; those that are not open to change and not forward-looking will not be far behind. Organisations have to constantly employ different ‘listening methods’ to gauge the pulse of employees across generations, check on new trends and keep revisiting their programs and policies to imbibe what’s new, instead of sticking to the ‘tried and tested’. ... Learning only happens when one’s thoughts and opinions are challenged by people from entirely different backgrounds or with a very different thought process from one’s own. The influx of talent from diverse groups, especially across generations, hence remains essential for the organisation. The early-age talent brings enthusiasm and challenge; the older age group infuses much-needed wisdom and experience! Sensitising managers and leaders is equally important, since they lead the staff who take the organisation ahead, especially in turbulent times. ‘How to lead a team with members across generations’ is a learning module that organisations must invest in – incorporating elements like empathy, situational leadership and leaving one’s ego behind.


CIO Fletcher Previn on designing the future of work

The network that can properly support hybrid work needs to be more distributed and porous, and has a very different attack surface than when we were all in the office. Technologies like Zero Trust become even more important, along with split tunnel VPNs and having the right endpoint security strategy so you don’t have to backhaul all the traffic in order to inspect it. You need carrier and path diversity at your carrier-neutral facilities and network points of presence, and you want to have a good peering strategy so you can bring applications closer to the end users and take traffic off the public internet. Full-stack observability becomes more urgent in a hybrid world. How do we really understand the experience our employees are having when they are connecting from across all sorts of networks that we don’t manage? We need to understand the performance of the public internet and various SaaS tools in order to really know what our hybrid work experience is going to be for our people. We also need tools that provide valuable observability that let us detect and fix problems before our employees even know there is an issue brewing.



Quote for the day:

"Leadership should be born out of the understanding of the needs of those who would be affected by it." -- Marian Anderson

Daily Tech Digest - April 17, 2023

The Power Of Silence: 10 Reasons Silent People Are Successful

Being silent often goes hand-in-hand with improved observation. When you’re not focused on expressing your thoughts, you have more mental bandwidth to take in your surroundings. This heightened awareness allows you to understand people and situations better. Many successful individuals credit their observation skills as contributing to their achievements. By carefully observing their environment, they can identify opportunities and threats others might overlook. A quiet mind leads to better focus and concentration. When you’re silent, it’s easier to direct your attention to the task, free from distractions or competing thoughts. This improved focus can enhance your decision-making abilities and boost your overall productivity. ... Silence can be a powerful tool for emotional regulation. Silent individuals often excel at managing their emotions, avoiding impulsive actions, and maintaining composure in challenging situations. Staying calm under pressure can lead to better decision-making and increased resilience. 


7 cybersecurity mindsets that undermine practitioners and how to avoid them

Security is often seen as a standalone function or additional product that is bolted onto the real infrastructure or as a discrete thing to be finalized and delivered. This is a long-standing view in software development, something similar to the way we once thought about quality: as a distinct, separate component of things. “Quality is not an act, it’s a habit,” according to an elegant paraphrase of Aristotle. Just like quality, security is not a finished product but rather an ongoing discipline. When we see security as a practice, to be continually refined and honed, it frees up the energy to engage it as such. We grow healthier by exercising regularly and monitoring our diet daily; such is security. If we want to get good at guitar or a martial art, we must keep coming back to it and refining it, but there is always more to develop — just as in security. Instead of bemoaning this fact, we can lean into it and use it to fuel our efforts. It’s actually a blessing to work in a field that always has room for growth and can fully engage our capabilities. 


A distributed database load-balancing architecture with ShardingSphere

The key point of ShardingSphere-Proxy cluster load balancing is that the database protocol itself is designed to be stateful (connection authentication status, transaction status, Prepared Statement, and so on). If the load balancer in front of ShardingSphere-Proxy cannot understand the database protocol, your only option is four-tier (Layer 4) load balancing across the ShardingSphere-Proxy cluster. In this case, a specific proxy instance maintains the state of the database connection between the client and ShardingSphere-Proxy. Because the proxy instance maintains the connection state, four-tier load balancing can only achieve connection-level load balancing: multiple requests over the same database connection cannot be distributed across multiple proxy instances, so request-level load balancing is not possible. ... Theoretically, there is no functional difference between a client connecting directly to a single ShardingSphere-Proxy or to a ShardingSphere-Proxy cluster through a load-balancing portal. However, there are some differences in the technical implementation and configuration of the different load balancers.
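The connection-level constraint can be illustrated with a toy model (hypothetical proxy names, no real networking): with a Layer 4 balancer, the backend is chosen once, when the connection is opened, and every statement on that connection must then go to the same instance, because only that instance holds the session state.

```python
from itertools import cycle

# Round-robin pool of hypothetical proxy instances behind an L4 balancer
proxies = cycle(["proxy-1", "proxy-2", "proxy-3"])

class DatabaseConnection:
    """Toy model: the backend is fixed at connect time, because only that
    proxy instance holds this connection's auth/transaction/statement state."""
    def __init__(self):
        self.backend = next(proxies)      # decided once, per connection

    def execute(self, sql):
        # Every statement on this connection is pinned to the same instance;
        # the balancer cannot re-route individual requests.
        return (self.backend, sql)

conn_a = DatabaseConnection()
conn_b = DatabaseConnection()
# All of conn_a's statements land on one instance...
assert {conn_a.execute(q)[0] for q in ("BEGIN", "SELECT 1", "COMMIT")} == {"proxy-1"}
# ...while new connections spread the load across the cluster.
assert conn_b.backend == "proxy-2"
```

Request-level balancing would require a balancer that speaks the database protocol and can migrate or share that session state, which is exactly what a protocol-unaware L4 proxy cannot do.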


How Synthetic Data Can Help Train AI and Maintain Privacy

Common use cases for synthetic data include software engineering when new features are built but no production data is available, says Jim Scheibmeir, senior director analyst with Gartner. For instance, if software is tested for an autonomous vehicle, and it needs new information about the weather or obstructions in the road, he says. Different scenarios can be generated to test that autonomous algorithm to prepare it. Data scientists who are trying to create new algorithms, Scheibmeir says, or need to prove out new hypotheses might struggle to get their hands on production data. That limited availability might have to do with restricted access, compliance, or regulation, making synthetic data attractive. The rise of generative AI might also play a role in synthetic data generation. “Certainly, ChatGPT is going to reinvigorate our imagination of what generative can do for us,” Scheibmeir says. “Gartner urges organizations to look at proper test data management, including synthetic data generation, for a few different reasons.” 
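A minimal sketch of the idea, with a hypothetical schema: generate records whose distributions and correlations mimic production data, so models and tests get a realistic signal without any real individual's values ever being present.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical synthetic "patient" records: no real PII, just plausible shapes
ages = rng.integers(18, 90, size=n)
# Correlate a vital sign with age so downstream models learn a realistic signal
systolic = 100 + 0.4 * ages + rng.normal(0, 10, size=n)
labels = (systolic > 140).astype(int)     # e.g., a hypertension flag

# Population-level statistics survive; individual records are pure fiction
print(ages.mean(), labels.mean())
```

The seeded generator makes the dataset reproducible across test runs, which matters when synthetic data feeds automated pipelines.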


What business executives don’t understand about IT

When the CEO doesn’t think IT is important enough to get top-level attention, that message filters down to the rest of the corporation. IT is not viewed to be as important as Sales, Finance, Manufacturing, Operations, or Marketing — dangerous in a highly competitive environment where efficient or innovative systems can spell the difference between the corporation’s success and failure. ... Systems development is another important area executives need to understand. The systems IT develops will not be used by IT; rather, they will become integral to the requesting department. It is important, therefore, that management understand the processes involved in proposing the system, estimating the cost, determining the ROI, producing the deliverables, changing the specifications and time frames, and measuring the system effectiveness. After all, the completed system may impact sales projections, departmental costs, and individual incentives, to name a few. Management must also ensure that the people in the user organization are given the time and recognition to do the work required to develop the precise specifications of the system. 


Tech companies including Adobe are taking a new look at a big industry debt issue

Despite the drag of technical debt that the data suggests, some industry executives say it gets a bad reputation. “If you’re tech-debt-free, you’re not innovating,” said Frans Xavier, CTO of low-code/no-code security automation platform Swimlane. In this sense, technical debt is a signal of iteration. In fact, in a recent report from consumer electronics company TE Connectivity, 55% of the engineers surveyed said it’s iteration — not total transformation — that represents innovation at its core. Adobe head of strategic development for creative cloud partnerships Chris Duffey is looking to reshape technical debt. “I would offer to reframe technical debt as the value of insight gathering throughout the innovation creation process,” Duffey said. The “fail fast” dogma that propels much of the technology industry (when not taken literally) references experimentation, insight gathering, and optimization, he added. This can be hard to see when you look solely at the data, in part because it’s difficult to quantify the process of innovation. 


Moving beyond DEI: Fostering belongingness in the workplace

Measuring belongingness is different from simply measuring diversity and inclusion. Diversity and inclusion are behaviours, so they can largely be measured through policies and procedures. Belongingness, on the other hand, is an emotional response encompassing factors such as an individual’s trust in, comfort with, and openness towards the company. Belongingness happens when employees feel ‘valued’, meaning not only that they are acknowledged and appreciated for their work, but also that they understand how their work contributes to the company’s vision, mission, key priorities, and growth. It also means that ‘they matter’: being part of the team, top of mind for leading and driving initiatives, ‘trusted’ and ‘cared for’. That is the ultimate cement binding them to the culture of the company. Belongingness lives in these little things that define “moments that matter”. Let’s explore that in greater detail through the questions that come to an employee’s mind as they experience an organization.


Enhance data governance with distributed data stewardship

Data stewards are a central point of contact: they enforce accountability across the data lifecycle and oversee data governance and visibility. In many organizations, data stewardship is a centralized business or IT function, which requires enterprise-wide data governance and expertise in data management and governance execution. Distributed data stewardship, by contrast, is a model or framework that allows the teams closest to the data to manage access and permissions; data management is decentralized and resides within the business unit. ... The core components of a distributed data stewardship program are similar to those of a centralized one. The success of such a model depends on how well decentralized IT, governance, and distributed access management work together. Because a distributed data stewardship model delegates data management responsibilities throughout the enterprise, the fundamental difference between the two models lies in shifting the organization toward decentralized data access. This takes time, effort, and a regular cadence, along with key stakeholders who agree on and adhere to the framework.
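The delegation model described above can be sketched in code. This is a minimal, illustrative sketch only — the class and steward names are invented for this example, not drawn from any real governance product — showing the key property of distributed stewardship: access to a dataset can only be granted by the steward of the business unit that owns it, not by a central function or another unit's steward.

```python
# Illustrative sketch of distributed data stewardship.
# All names (Steward, Dataset, StewardshipRegistry) are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Steward:
    name: str
    business_unit: str


@dataclass
class Dataset:
    name: str
    business_unit: str
    steward: Steward
    approved_users: set = field(default_factory=set)


class StewardshipRegistry:
    """Routes access requests to the steward who owns the dataset."""

    def __init__(self):
        self.datasets = {}

    def register(self, dataset: Dataset):
        self.datasets[dataset.name] = dataset

    def request_access(self, dataset_name: str, user: str, approver: Steward) -> bool:
        ds = self.datasets[dataset_name]
        # Only the owning unit's steward may grant access -- this is the
        # decentralization step that distinguishes the distributed model.
        if approver is not ds.steward:
            return False
        ds.approved_users.add(user)
        return True


# Usage: the Finance steward can grant access to Finance data,
# but not to Marketing data, and vice versa.
finance_steward = Steward("Ana", "Finance")
marketing_steward = Steward("Raj", "Marketing")

registry = StewardshipRegistry()
registry.register(Dataset("quarterly_revenue", "Finance", finance_steward))
registry.register(Dataset("campaign_metrics", "Marketing", marketing_steward))

assert registry.request_access("quarterly_revenue", "lee", finance_steward)
assert not registry.request_access("quarterly_revenue", "lee", marketing_steward)
```

In a real deployment the registry would sit behind an identity and access management system; the point of the sketch is simply that the approval check references the dataset's own steward, so decision rights live with the unit closest to the data.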


Cognitive flexibility: the science of how to be successful in business and at work

Cognitive flexibility aids learning under uncertainty and helps in negotiating complex situations. It is not merely about changing your decisions: higher cognitive flexibility involves rapidly recognising when a strategy is failing and switching to another. The importance of cognitive flexibility was first discovered in clinical patients. The function engages areas of the brain involved in decision making, including the prefrontal cortex and striatal circuitry. When this circuitry becomes dysfunctional due to neurological disease or psychiatric disorders, it can cause rigidity of thought and a failure to adapt. Cognitive flexibility is required in many real-world situations, and the category of workers requiring the highest level of adaptability is arguably entrepreneurs. Entrepreneurs need to show flexibility not only in generating ideas, but also in allocating resources and in social exchanges. Indeed, our previous research has shown that entrepreneurs have greater cognitive flexibility than high-level managers, which ultimately helps them to solve problems and make risky decisions successfully.


IT leadership: Mission-driven IT and finding your "why"

People talk about IT strategy or tech strategy or product strategy; they talk about deliverables, roadmaps, all of that stuff. To me, it all starts with the mission, and our mission is to transform lives by unlocking better evidence. And really what that means day to day is helping facilitate and support and enable the clinical trial process, which we know in recent years especially has—the importance of which is really second to none. It’s accelerated during the pandemic, naturally, as we look for treatments and preventatives for Covid. But now, what it’s done is it’s poured gas on the fire in a whole bunch of other areas, too. So the industry is working faster than ever, and I like to think we’re doing life-changing work. I believe we are. And the technology that we build at Clario and the expertise that we bring helps support the companies that are running clinical trials, the sponsors, the people who are running trials day to day, the sponsor—or the trial teams, as well as the sites. You know, the folks, the nurses, the clinicians, the physicians who are all part of this process and helping facilitate this.



Quote for the day:

"Increasingly, management's role is not to organize work, but to direct passion and purpose." -- Greg Satell