Daily Tech Digest - April 24, 2023

Is Strategic Thinking Dead? 5 Ways To Think Better For An Uncertain Future

Strategic thinking is distinguished from tactical thinking because it takes a longer view rather than reacting to events as they happen. It pushes you to be proactive in your actions, rather than reactive. And even when addressing the immediate, strategic thinking can actually increase your effectiveness—because your advance planning will have given you the opportunity to explore potential situations, assess responses and judge outcomes—and these can prepare you for how you react when you have less runway. ... One of the hallmarks of a strategic thinker is clarity of purpose. Be sure you’re clear about where you want to go—as an individual, a team or a business. Know your true north because it will help you choose wisely among multiple options. The language you choose to describe where you want to be (or how you understand a challenge) will constrain or create possibilities, so also be careful about how you describe your intentions. If your purpose is to unleash human potential for students, that will likely take you farther than a goal to simply provide great classroom experiences.


Why Backing up SaaS Is Necessary

Looking at the possibilities to protect their data on those SaaS platforms, organisations quickly realised that their SaaS solutions were not as well protected as the other applications run in their own datacentre or private cloud. Companies that were aware of this had to put up with it, since the product forced them to use it as delivered. Users had to learn the hard way that most SaaS solutions have a shared responsibility model in which the customer is responsible for his or her own data. ... Even more critically, it’s important to ensure backups are stored in an independent cloud dedicated to data protection and not dependent on one of the large hyperscalers. A third-party cloud gives total control over backed-up data and can easily ensure three to four copies are always made and reside in multiple locations. By retaining SaaS data in an independent backup-focused cloud, customers can also avoid the egress charges that come part and parcel with the public cloud. These extra charges often result in surprise bills after data restores and make it difficult to budget.
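The multi-copy idea can be sketched in a few lines. This is a hypothetical illustration only — the location names and helper functions are invented, not any vendor's API — but it captures the rule: a backup counts as protected only once a copy exists in every independent location.

```python
# Hypothetical sketch of "three to four copies in multiple locations":
# an object is protected only when every target location holds a copy.
LOCATIONS = ["dc-eu-1", "dc-us-1", "dc-ap-1"]  # invented location names

def back_up(obj_id, data, stores):
    for loc in LOCATIONS:
        stores.setdefault(loc, {})[obj_id] = data  # one independent copy per location

def is_protected(obj_id, stores):
    return all(obj_id in stores.get(loc, {}) for loc in LOCATIONS)

stores = {}
back_up("mailbox-42", b"...exported SaaS data...", stores)
print(is_protected("mailbox-42", stores))  # True
print(is_protected("mailbox-99", stores))  # False
```

A real backup cloud would of course replicate asynchronously and verify checksums; the point here is only the "all locations or not protected" invariant.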


7 steps to take before developing digital twins

Leaders in any emerging technology area look for stories to inspire adoption. Some should be inspirational and help illustrate the art of the possible, while others must be pragmatic and demonstrate business outcomes to entice supporters. If your business’s direct competitors have successfully deployed digital twins, highlighting their use cases often creates a sense of urgency. ... Harry Powell, head of industry solutions at TigerGraph, says, “When creating a digital twin of a moderately sized organization, you will need millions of data points and relationships. To query that data, it will require traversing or hopping across dozens of links to understand the relationships between thousands of objects.” Many data management platforms support real-time analytics and large-scale machine learning models. But digital twins used to simulate the behavior across thousands or more entities, such as manufacturing components or smart buildings, will need a data model that enables querying on entities and their relationships.
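The "hopping across links" that Powell describes is essentially graph traversal. Here is a minimal, illustrative sketch — a plain adjacency map and a breadth-first search, not TigerGraph or any real graph database, with invented entity names — showing how a relationship query fans out hop by hop:

```python
from collections import deque

# Illustrative digital twin as an adjacency map (entity -> related entities).
# Entity names are invented for the example.
twin = {
    "pump-1": ["valve-3", "sensor-7"],
    "valve-3": ["pipe-9"],
    "sensor-7": [],
    "pipe-9": ["tank-2"],
    "tank-2": [],
}

def related_within(entity, hops):
    """Return all entities reachable from `entity` in at most `hops` link traversals."""
    seen, frontier, found = {entity}, deque([(entity, 0)]), set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # don't expand past the hop limit
        for nbr in twin.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                found.add(nbr)
                frontier.append((nbr, depth + 1))
    return found

print(sorted(related_within("pump-1", 2)))  # ['pipe-9', 'sensor-7', 'valve-3']
```

At millions of entities and dozens of hops, this is exactly the workload a graph data model is built to index, which is the article's point about choosing the right platform.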


Enterprise Architecture Management (EAM) in digital transformation

The point is to accompany these “things” throughout their entire life cycle on the basis of a coherent technology vision, to recognise innovation potential, to identify technology risks, and to derive a technology strategy. Often EAM fails at exactly this point of corporate language, because the mostly abstract orders, couched in abstract or economic business language, come directly from the board and “have to be implemented”. There is usually no budget, because “everyone has to participate”. This is the reality, and EAM is ground down between the board on one side and development and operations on the other. ... What could be a benefit of EAM? You always have to think about this question in the context of your own company! A TOGAF copy of EAM goals or principles is not helpful, e.g. “The primary goal of EAM is cost reduction”. That has never worked. Yes, costs may be reduced. But EAM primarily brings more quality, and the savings do not show up as accounting line items; they go straight into new methods and procedures: a better overview of the applications enables projects to start faster, and the time and effort saved is immediately put into other sensible work.


Online Safety Bill could pose risk to encryption technology used by Ukraine

The Online Safety Bill will give the regulator, Ofcom, powers to require communications companies to install technology, known as client-side scanning (CSS), to analyse the content of messages for child sexual abuse and terrorism content before they are encrypted. The Home Office maintains that client-side scanning, which uses software installed on a user’s phone or computer, is able to maintain communications privacy while policing messages for criminal content. But Hodgson told Computer Weekly that Element would have no choice but to withdraw its encrypted mobile phone communications app from the UK if the Online Safety Bill passed into law in its current form. Element supplies encrypted communications to governments, including the UK, France, Germany, Sweden and Ukraine. “There is no way on Earth that any of our customers would ever consider that setup [client-side scanning], so obviously we wouldn’t put that into the enterprise product,” he said. “But it would also mean that we wouldn’t be able to supply a consumer secure messaging app in the UK. ...” he added.


The biggest data security blind spot: Authorization

When authorization is overlooked, companies have little to no visibility into who is accessing what. This makes it challenging to track access, identify unusual behavior, or detect potential threats. It also leads to having “overprivileged” users – a leading cause of data breaches according to many industry reports. Authorization oversight is critical when employees leave a company or change roles within the organization, as they might retain access to sensitive data they no longer need. If access rights never expire, unauthorized users have access to sensitive data. And with layoffs, the risk of data theft increases. The lack of proper authorization also puts companies at risk of non-compliance with privacy laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), which can result in significant penalties and reputational damage. Most organizations store sensitive data in the cloud, and the majority do so without any kind of encryption, making proper authorization all the more necessary.
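The expiring-access idea can be made concrete with a small sketch. Everything here is hypothetical — the class and method names are invented for illustration, not any product's API — but it shows the two safeguards the paragraph calls for: grants expire by default, and all of a person's access is revoked in one step when they leave or change roles.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: access grants expire by default and are checked on
# every request, so "forever" access can't accumulate. Invented names.
class AccessGrant:
    def __init__(self, user, resource, expires_at):
        self.user = user
        self.resource = resource
        self.expires_at = expires_at

class Authorizer:
    def __init__(self):
        self.grants = []

    def grant(self, user, resource, ttl_days=90):
        expires = datetime.now() + timedelta(days=ttl_days)
        self.grants.append(AccessGrant(user, resource, expires))

    def revoke_user(self, user):
        # Called on offboarding or role change: drop every grant at once.
        self.grants = [g for g in self.grants if g.user != user]

    def is_allowed(self, user, resource, now=None):
        now = now or datetime.now()
        return any(
            g.user == user and g.resource == resource and g.expires_at > now
            for g in self.grants
        )

auth = Authorizer()
auth.grant("alice", "payroll-db")
print(auth.is_allowed("alice", "payroll-db"))  # True
auth.revoke_user("alice")  # offboarding
print(auth.is_allowed("alice", "payroll-db"))  # False
```

A production system would also log every `is_allowed` decision, which is what gives the visibility into "who is accessing what" that the article says most companies lack.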


AI can write your emails, reports, and essays. But can it express your emotions? Should it?

What do we lose when we outsource expressing our emotions to an AI chatbot? We've all heard that sitting with our emotions and feeling them is how we process them and get the intensity to pass. Speaking from the heart about a complex, heavy topic is one way we can feel true catharsis. AI can't do that processing for us. There's a common theme during periods of technological innovation that technology is supposed to do the mundane, annoying, dangerous, or insufferable tasks that humans hate doing. Many of us would sometimes prefer to avoid emotional processing. But experiencing complex emotions is what makes us human. And it's one of the few things an AI model as advanced as ChatGPT can't do. If you think of expressing emotions as less of an experience and more of a task, it might seem clever to automate them. But you can't conquer human emotions by passing the unsavory parts of them to a language model. Emotions are critical to the human experience, and denying them their place within yourself can lead to unhealthy coping mechanisms and poor physical health.


Benefits of data mesh might not be worth the cost

Data mesh might be a good framework for businesses that acquire companies but don't consolidate with them, and thus want a decentralized approach to most or even all of the individual companies' data, Thanaraj said. It might also be a good option for large organizations that operate in multiple countries. These organizations' leaders might want to -- and are sometimes required to -- maintain local data autonomy. "That's where I see data mesh being a much more appropriate data architecture to apply," Thanaraj said. Still, questions remain about the long-term value of data mesh. In fact, Gartner labeled data mesh as "obsolete before plateau" in its 2022 "Hype Cycle for Data Management." Moreover, organizations could more readily use other better-defined and more easily implemented approaches to improve their data programs, Aiken said. Organizations have DataOps, existing data management frameworks and data governance practices at their disposal. If a data program doesn't follow best data management practices, data mesh won't improve it. "Those improvements could be achieved by other practices that don't have a buzz around them like data mesh," he said.


Do the productivity gains from generative AI outweigh the security risks?

In short, using generative AI to code is dangerous, but its efficiencies are so great that it will be extremely tempting for corporate executives to use it anyway. Bratin Saha, vice president for AI and ML Services at AWS, argues the decision doesn’t have to be one or the other. How so? Saha maintains that the efficiency benefits of coding with generative AI are so sky-high that there will be plenty of dollars in the budget for post-development repairs. That could mean enough dollars to pay for extensive security and functionality testing in a sandbox — both with automated software and expensive human talent — and the very attractive spreadsheet ROI. Software development can be executed 57% more efficiently with generative AI — at least the AWS flavor — but that efficiency gets even better if it replaces less experienced coders, Saha said in a Computerworld interview. “We have trained it on lots of high-quality code, but the efficiency depends on the task you are doing and the proficiency level,” Saha said, adding that a coder “who has just started programming won’t know the libraries and the coding.”


The staying power of shadow IT, and how to combat risks related to it

The problem, when it comes to uncovering shadow IT, is that information about what applications exist and who has access to them is spread across a company, in many different silos. It lives in the files of sometimes hundreds of business application owners – end-users in marketing, sales, customer service, finance, HR, product development, legal and other departments who acquired the applications. How do most organizations go about finding this data? They send emails, Microsoft Teams or Slack messages to employees asking them to notify IT if they have purchased or signed up for a free app, and who they’ve given access to (and hope everyone will respond). Then IT manually inputs any information they get into a spreadsheet. ... The data must be automatically and continuously collected and normalized. It must be made available to all SaaS management stakeholders, from the people who own and must therefore take responsibility for managing their apps, to IT leaders and admins, IT security teams, procurement managers, and more.
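The collect-and-normalize step the paragraph ends on might look something like this minimal sketch. The record fields, silo names and app names are invented for illustration; the point is that the same app reported by different silos under slightly different spellings must collapse into one canonical inventory entry with all its owners attached.

```python
# Hypothetical sketch: merging app records from different silos
# (expense reports, SSO logs, survey replies) into one normalized inventory.
raw_records = [
    {"source": "expenses", "app": "Figma ", "owner": "marketing"},
    {"source": "sso",      "app": "figma",  "owner": "design"},
    {"source": "survey",   "app": "Notion", "owner": "hr"},
]

def normalize(records):
    inventory = {}
    for r in records:
        name = r["app"].strip().lower()  # canonical app name
        entry = inventory.setdefault(name, {"owners": set(), "sources": set()})
        entry["owners"].add(r["owner"])
        entry["sources"].add(r["source"])
    return inventory

inv = normalize(raw_records)
print(sorted(inv))                      # ['figma', 'notion']
print(sorted(inv["figma"]["owners"]))   # ['design', 'marketing']
```

Run continuously instead of via one-off email surveys, this kind of pipeline is what turns the spreadsheet described above into a live inventory all SaaS stakeholders can share.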



Quote for the day:

“Unless we are willing to go through that initial awkward stage of becoming a leader, we can’t grow.” -- Claudio Toyama

Daily Tech Digest - April 23, 2023

Shadow IT, SaaS Pose Security Liability for Enterprises

All issues surrounding shadow IT can be traced back to an organization's lack of visibility. An unmanaged software stack gives IT teams zero insight into how sensitive company information is being used and distributed. Since these tools are not vetted properly and are left unmonitored, the data they store is not adequately protected by most organizations. This creates the perfect framework for hackers to easily seize important data, such as confidential financial records or personal details. Critical corporate data is at risk because most, if not all, SaaS tools require corporate credentials and access to an organization's internal network. A recent survey by Adaptive Shield and CSA actually shows that in the past year alone, 63% of CISOs have reported security incidents from this type of SaaS misuse. As stated earlier, the recurring theme that many businesses are experiencing with shadow IT is the risk associated with a data breach. However, it is equally important to realize the potential industry scrutiny that businesses face and the penalties they receive from regulators because of sprawling shadow IT.


The Cyber Resilience Act Threatens Open Source

At the heart of the issue is the need for organizations to self-certify their compliance with the act. Since open source is often maintained by a small, loose-knit group of contributors, it is difficult to see how this will work. Here’s the concern in a nutshell. Suppose you write up a cool little C++ program for your own use. You aren’t a company, and you didn’t do it for profit. Wanting to share your work, you post your program on GitHub with an open source license. ... In fact, it is even encouraged. That’s how open source works. The trouble starts when the GRID database suffers a data breach, and the cause turns out to be a vulnerability in your code. Under the proposed law, it is possible you’d be left holding the bag for a large sum of money thanks to your generous hobby project that didn’t earn you a cent. The situation is even more complex if your code has multiple contributors. Was it your code that caused the breach or the other developer’s code? Who “owns” the project? Are all contributors liable?


Why Your Personal Brand Needs A Niche: The Benefits Of Specialization

Finding your niche also allows you to focus your energy and resources on a specific area, reducing the chances of you feeling overwhelmed trying to be everything to everyone. A niche provides a compass for your efforts, ensuring that the work you do aligns with your skills and interests. While being more specific can feel uncomfortable, it ultimately enables employers and clients to understand the specific value you offer. In the early days of my consultancy, I found myself saying yes to everything, including some speaking engagements that fell outside of my immediate area of expertise or taking on clients who demanded a lot of additional effort on my part to cover the entire scope of the services they sought that went beyond my offerings. Over time, I defined clearer boundaries around my scope of services. I also tried to more explicitly communicate which services I did not offer or consider within my area of expertise. When you niche down and clearly define your area of focus, it enables you to make clearer career choices, only pursuing opportunities that allow you to reinforce your positioning.


Former Microsoft CIO Jim DuBois Dishes On AI and Future of IT

One of the things we have to figure out in the future of work is that a huge part of the population isn’t able to take advantage of this hybrid and remote opportunity. And what do we do for them? Do we end up getting to a place where people are picking jobs based on whether they can work remote or not? And are we going to have to compensate people differently for being on- or off site? That’s something that hasn’t been solved … There are a lot of companies that haven’t figured out how to keep the collaboration and the culture going in a remote workforce. So they just said, “Oh, we’ve got to get people back into the office to do that.” I would say, “Or, you could figure out how to collaborate and keep your culture going with remote.” ... I’m a believer in carrot rather than stick incentives. Rather than compliance requirements, we need to focus on the fact that there’s so much value in ESG and in having a more diverse team. We need to focus more on the incentives and less on the “because we told you to” part.


Using generative AI to understand customers

In terms of better understanding customers, generative AI is really effective in summarising information. Companies are already using the technology to create auto-summaries of market research reports, eliminating the need to précis reports manually. Going forward, there is potential to expand this use case to summarise large volumes of information quickly and efficiently in order to provide concise answers to key business questions. ... Generative AI can also make it easier for all stakeholders to access market research without having to involve an insights manager each time, thereby removing access barriers and facilitating the seamless integration of consumer insights into daily operations. Moreover, generative AI can help to address common concerns associated with all stakeholders accessing market research, such as non-research workers asking the wrong questions. By prompting relevant questions related to their search query, the technology can help those without research backgrounds to ask better questions, ultimately leading to more accurate and useful customer information.


Optimizing SaaS With Automation and Zero-Touch IT

While it may seem daunting, the journey to achieving zero-touch IT is not out of reach. It does require investment in time, technology and people, however. And once you get there, the efficiencies will be apparent. Let’s break these benefits down by category. Zero-touch IT helps companies manage their software applications much more effectively. IT groups have historically gotten bogged down in the manual execution of tasks that are complicated and tedious, despite being basic and common. Two processes cited as top concerns for IT professionals, onboarding new employees and offboarding departing employees, are concrete examples. But managing the user life cycle of an employee doesn’t just start at onboarding and stop at offboarding. Many changes take place during an employee’s time at the organization—promotions, changes in departments, password resets, new project assignments, etc. And every single time an event like this occurs, some type of action is required, such as giving or revoking access to files, elevating access rights or taking security steps to prevent unauthorized access.
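The lifecycle events just listed lend themselves to event-driven automation: each HR event triggers the corresponding access change with no manual IT step. A hypothetical sketch, with the role-to-app mapping and event shapes invented for illustration:

```python
# Hypothetical zero-touch lifecycle sketch: HR events drive access changes
# automatically. Role names, app names and event fields are invented.
ROLE_APPS = {
    "sales": {"crm", "email"},
    "engineering": {"repo", "ci", "email"},
}

def apply_event(access, event):
    user, kind = event["user"], event["kind"]
    if kind == "onboard":
        access[user] = set(ROLE_APPS[event["role"]])
    elif kind == "role_change":
        access[user] = set(ROLE_APPS[event["role"]])  # drop old grants, add new
    elif kind == "offboard":
        access.pop(user, None)                        # revoke everything at once
    return access

access = {}
apply_event(access, {"user": "dana", "kind": "onboard", "role": "sales"})
apply_event(access, {"user": "dana", "kind": "role_change", "role": "engineering"})
print(sorted(access["dana"]))  # ['ci', 'email', 'repo']
apply_event(access, {"user": "dana", "kind": "offboard"})
print("dana" in access)  # False
```

The key property is that the role change replaces the old grant set rather than adding to it, which is exactly the privilege creep that manual processes tend to leave behind.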


Cyber insurer launches InsurSec solution to help SMBs improve security, risk management

InsurSec solutions are new, emerging offerings, but the concept behind them, and its potential to add value for the parties involved, is being recognized more widely, particularly for SMBs and organizations struggling with an adverse blend of low maturity and cost constraints. “I think the insurance market is recognizing that their future offering in this space has to grow beyond simple loss protection,” Paul Watts, distinguished analyst at the Information Security Forum, tells CSO. “Providing complementary services to help organizations with proactive and reactive management of cyber risk could also help foster stronger relationships between insurer and client.” Both parties stand to benefit here – by engaging in this way, risk is better (and jointly) managed, Watts says. Insurers are mitigating losses, and clients are drawing down on capabilities that were previously too expensive for consideration and could see lower premiums as a result.


Novel Technique Exploits Kubernetes RBAC to Create Backdoors

Researchers at cybersecurity firm Aqua Security said they recorded and analyzed an attack on the firm's Kubernetes honeypots that used the RBAC system to gain persistence. Kubernetes role-based access control (RBAC) is a method of restricting network access based on the roles of individual users within an organization. In their honeypots, the researchers exposed AWS access keys in various locations on the cluster and received a beacon indicating that the access keys were used by the attacker to try to gain further access to the cloud service provider account and leverage the attack to steal more resources and data. "The findings are significant as they shed light on the risks of misconfigurations and how even large organizations can overlook the importance of securing their clusters, leaving them vulnerable to potential disasters with just one mistake," according to researchers. The large-scale campaign, dubbed RBAC Buster, allowed attackers to gain initial access by exploiting a misconfigured API server that allowed unauthenticated requests from anonymous users with privileges.


How does blockchain fit into today’s enterprise?

According to Bennett, outside of the financial services sector, “we are still not at the point where we can confidently say that blockchain really is delivering the business value that people are looking for, simply because it is incredibly difficult to actually set up a blockchain network that at the end of the day really needs all those blockchain features,” she said. Stack Overflow recently conducted a survey to find out what new technologies made it past what Gartner refers to as the hype cycle. Many new technologies can stir up excitement in the industry, but not all will actually see widespread adoption. They ranked technologies on a scale of experimental to proven and positive to negative impact. On a scale from zero (experimental) to 10 (proven), blockchain technology came in towards the middle at 4.8. And on a scale from zero (negative impact) to 10 (positive impact), it received a score of 5.3. Another survey by Foundry echoes these sentiments. It found that 51% of respondents were not interested in adopting blockchain technology within their organization.


Navigating The Future Of Cyber

Cyber is about more than protecting information—risk management, incident response planning and threat intelligence can often be directly correlated with increasing trust within businesses. Many organizations recognize the importance of prioritizing cybersecurity and have reported significant improvements in trust and efficiency through their efforts. In Deloitte Global’s latest Future of Cyber Survey, almost 70% of businesses identified as highly mature organizations when it comes to cyber believe cybersecurity has positively impacted their organization's reputation and productivity. From robust cyber planning across the business to effective board-level engagement, high cyber performers recognize the importance of cyber responsibility and involvement across the whole organization. Beyond looking across the organization, cyber planning strategies should be regularly reviewed and updated to protect trust in the organization.



Quote for the day:

"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward

Daily Tech Digest - April 22, 2023

What CIOs need to become better enablers of sustainability

Key to this is a greater understanding of business operations and their production of CO2, or use of unsustainable practices and resources. As with most business challenges, data is instrumental. “Like anything, the hard work is the initial assessment,” says CGI director of business consulting and CIO advisor Sean Sadler. “From a technology perspective, you need to look at the infrastructure, where it’s applied, how much energy it draws, and then how it fits into the overall sustainability scheme.” CIOs who create data cultures across organizations enable not only sustainable business processes but also reduce reliance on consultancies, according to IDC. “Organizations with the most mature environmental, social, and governance (ESG) strategies are increasingly turning to software platforms to meet their data management and reporting needs,” says Amy Cravens, IDC research manager, ESG Reporting and Management Technologies. 


How to implement observability in your IT architecture

Although it has grown out of the APM market, observability is more than just APM with a new name and marketing approach. The most crucial factor differentiating observability from APM is that observability includes three distinct monitoring approaches—tracing, metrics, and logs—while APM provides tracing alone. By collecting and aggregating these various types of data from multiple sources, observability offers a much broader view of the overall system and application health and performance, with the ability to gain much deeper insights into potential performance issues. Another important distinction is that open source tools are the foundation of observability, but not APM. While some APM vendors have recently open-sourced the client side of their stack, the server side of all the popular commercial APM solutions is still proprietary. These distinctions do not mean that observability and APM are unconnected. Application performance management can still be an important component of an observability implementation.
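The three signal types can be shown side by side with a tiny hand-rolled sketch. This is not a real observability stack (no OpenTelemetry, no vendor agent) and the function names are invented; it is standard-library Python illustrating a structured log line, a rough per-call latency trace, and a counter metric from the same instrumented call:

```python
import json
import logging
import time
from collections import Counter

# Minimal hand-rolled illustration of the three observability signals:
# logs (structured event records), traces (per-call timing), metrics (aggregates).
logging.basicConfig(level=logging.INFO, format="%(message)s")
metrics = Counter()

def traced(name, fn, *args):
    start = time.perf_counter()
    result = fn(*args)
    duration_ms = (time.perf_counter() - start) * 1000
    metrics[f"{name}.calls"] += 1  # metric: cheap aggregate counter
    # log doubling as a one-span trace: structured record with timing attached
    logging.info(json.dumps({"event": name, "duration_ms": round(duration_ms, 2)}))
    return result

def handle_request(x):
    return x * 2

print(traced("handle_request", handle_request, 21))  # 42
print(metrics["handle_request.calls"])               # 1
```

Real systems propagate trace context across services and ship each signal to its own backend, but the division of labor is the same: metrics answer "how often/how slow on average", traces answer "where did this request spend its time", and logs answer "what exactly happened".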


How Conversational Programming Will Democratize Computing

The scope of a conversation must mirror a human “mental stack”, not that of a computer. When I use a conventional Windows interface on my laptop, I am confronted with the computer’s file system which is presented as folders and files. That effort is reversed in conversational programming — the LLM system has to work with my limited human cognition facilities. This means creating things in response to requests, and reporting outcomes at the same level that I asked for them. Returning arcane error codes in response to requests will immediately break the conversation. We have already seen ChatGPT reflect on its errors, which means a conversation should retain its value for the user. ... The industrialization of LLMs is the only thing we can be reasonably sure about, because the investment has already been made. However, the rapid advancement of GPT systems will likely run aground in the same areas that other large-scale projects have in the past. The lack of collaboration between large competitors has eroded countless good ideas that depended on interoperability.


Dark Side of DevOps - the Price of Shifting Left and Ways to Make it Affordable

On the one hand, not having a gatekeeper feels great. Developers don’t have to wait for somebody’s approval - they can iterate faster and write better code because their feedback loop is shorter, and it is easier to catch and fix bugs. On the other hand, the added cognitive load is measurable - all the tools and techniques that developers have to learn now require time and mental effort. Some developers don’t want that - they just want to concentrate on writing their own code, on solving business problems. ... However, as companies grow, so does the complexity of their IT infrastructure. Maintaining dozens of interconnected services is not a trivial task anymore. Even locating their respective owners is not so easy. At this point, companies face a choice: either reintroduce the gatekeeping practices that negatively affect productivity, or provide a paved path - a set of predefined solutions that codifies best practices and takes away mental toil, allowing developers to concentrate on solving business problems.


Why generative AI will turbocharge low-code and no-code development

Generative AI's integration into low-code and no-code platforms will lower the barriers to adoption of these development environments in enterprises, agreed John Bratincevic, principal analyst at Forrester. “The integration of generative AI will see adoption of low-code by business users, since the learning curve for getting started on developing applications will be even lower,” Bratincevic said. The marriage of generative AI with low-code and no-code platforms will aid professional developers as well, analysts said. ... “These generative AI coding capabilities will be most helpful for developers working on larger projects that are looking for shortcuts to support commoditized or common sense requests,” said Hyoun Park, principal analyst at Amalgam Insights. “Rather than searching for the right library or getting stuck on trying to remember a specific command or term, GPT and other similar generative AI tools will be able to provide a sample of code that developers can then use, edit, and augment,” Park said.


Start with Sound Policies, Then Customize with Required Exceptions

Number one is our culture of security, not just within the cybersecurity organization, but broader than the cybersecurity organization looking at the entire Providence org – instilling security practices into our business practices, or business processes, instilling security mindset into our caregivers, because our caregivers truly are on the front lines of the cybersecurity battlefield. They’re the ones that are receiving phishing emails, they’re the ones that are making decisions on what they click on, what they don’t click on, interactions with our clinical device vendors, or clinical application vendors. They’re making risk choices every day. So informing them about security, training them on security, and instilling security culture – broader than just the security organization – has been a real focus of ours this year. Another focus of ours has been on implementing or continuing the journey, I should say, toward a zero trust approach here at Providence. And when I say zero trust, a lot of people use the term, “never trust, always verify.” 


Leap of Faith: Building Trust and Commitment with Engineering

Leaping before you’re ready will result in disappointment if not outright disaster. It is important to understand what knowledge and muscle are required along the various stages that lead to full Engineering Trust & Autonomy. Each organization must determine these trust criteria for itself; however, it is imperative to recognize that starting from the future end-state goal and working backwards promotes the greatest benefit (e.g., innovative inspirational differentiation). To be most effective, seek out leading teams already doing this in your organization. They do exist, but they are most likely considered one-offs, rogue, and exceptions to the internal norm. Good. That’s what you’re looking for! ... Once trust criteria are shared and definitive trust boundaries are in place, the hardest piece of this puzzle must be executed: executive leadership and individual commitment. Putting your strategy into play takes time, and during that time doubts will creep in. This is normal; however, there are a few tricks to leverage that ensure you stay the course.


Used Routers Often Come Loaded With Corporate Secrets

The big danger is that the wealth of information on the devices would be valuable to cybercriminals and even state-backed hackers. Corporate application logins, network credentials, and encryption keys have high value on dark web markets and criminal forums. Attackers can also sell information about individuals for use in identity theft and other scamming. Details about how a corporate network operates and the digital structure of an organization are also extremely valuable, whether you're doing reconnaissance to launch a ransomware attack or plotting an espionage campaign. For example, routers may reveal that a particular organization is running outdated versions of applications or operating systems that contain exploitable vulnerabilities, essentially giving hackers a road map of possible attack strategies. ... Since secondhand equipment is discounted, it would potentially be feasible for cybercriminals to invest in purchasing used devices to mine them for information and network access and then use the information themselves or resell it.


ChatGPT may hinder the cybersecurity industry

ChatGPT’s AI technology is readily available to most of the world. Therefore, as with any other battle, it’s simply a race to see which side will make better use of the technology. Cybersecurity companies will need to continuously combat nefarious users who will figure out ways to use ChatGPT to cause harm in ways that cybersecurity businesses haven’t yet fathomed. And yet this fact hasn’t deterred investors, and the future of ChatGPT looks very bright. With Microsoft investing $10 billion in OpenAI, it’s clear that ChatGPT’s knowledge and abilities will continue to expand. For future versions of this technology, software developers need to pay attention to its lack of safety measures, and the devil will be in the details. ChatGPT probably won’t be able to thwart this problem to a large degree. It can have mechanisms in place to evaluate users’ habits and home in on individuals who use obvious prompts like, “write me a phishing email as if I’m someone’s boss,” or try to validate individuals’ identities. OpenAI could even work with researchers to train its datasets to evaluate when their text has been used in attacks elsewhere.


A New Era of Natural Language Search Emerges for the Enterprise

Due to the statistical nature of their underlying technology, chatbots can hallucinate incorrect information, as they do not actually understand the language but are simply predicting the next best word. Often, the training data is so broad that explaining how a chatbot arrived at the answer it gave is nearly impossible. This “black box” approach to AI with its lack of explainability simply will not fly for many enterprise use cases. Welsh gives the example of a pharmaceutical company that is delivering answers to a healthcare provider or a patient who visits its drug website. The company is required to know and explain each search result that could be given to those asking questions. So, despite the recent spike in demand for systems like ChatGPT, adapting them for these stringent enterprise requirements is not an easy task, and this demand is often unmet, according to Welsh. ... Welsh predicts the companies that will win during this new era of the enterprise search space are those that had the foresight to have a product on the market now, and though the competition is currently heating up, some of these newer companies are already behind the curve. 



Quote for the day:

"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis

Daily Tech Digest - April 21, 2023

A team of ex-Apple employees wants to replace smartphones with this AI projector

It's a seamless blend of technology and human interaction that Humane believes can extend to daily schedule run-downs, seeing map directions, and receiving visual aids for cooking or when fixing a car engine -- as suggested by the company's public patents. The list goes on. Chaudhri also demoed the wearable's voice translator which converted his English into French while using an AI-generated voice to retain his tone and timbre, as reported by designer Michael Mofina, who watched the recorded TED Talk before it was taken down. Mofina also shared an instance when the wearable was able to recap the user's missed notifications without sounding invasive, framing them as, "You got an email, and Bethany sent you some photos." Perhaps the biggest draw to Humane and its AI projector is the team behind it. That roster includes Chaudhri, a former Director of Design at Apple who worked on the Mac, iPod, iPhone, and other prominent devices, and Bethany Bongiorno, also from Apple, who was heavily involved in the software management of iOS and MacOS.


Three issues with generative AI still need to be solved

Generative AI uses massive language models, it’s processor-intensive, and it’s rapidly becoming as ubiquitous as browsers. This is a problem because existing, centralized datacenters aren’t structured to handle this kind of load. They are I/O-constrained, processor-constrained, database-constrained, cost-constrained, and size-constrained, making a massive increase in centralized capacity unlikely in the near term, even though the need for this capacity is going vertical. These capacity problems will increase latency, reduce reliability, and over time could throttle performance and reduce customer satisfaction with the result. The need is for a more hybrid approach in which the AI components necessary for speed are retained locally (on devices) while the majority of the data resides centrally to reduce datacenter loads and decrease latency. Without a hybrid solution — where smartphones and laptops can do much of the work — use of the technology is likely to stall as satisfaction falls, particularly in areas such as gaming, translation, and conversations where latency will be most annoying.


Exploring The Incredible Capabilities Of Auto-GPT

The first notable application is code improvement. Auto-GPT can read, write and execute code and thus can improve its own programming. The AI can evaluate, test and update code to make it faster, more reliable, and more efficient. In a recent tweet, Auto-GPT’s developer, Significant Gravitas, shared a video of the tool checking a simple example function responsible for math calculations. While this particular example only contained a simple syntax error, it still took the AI roughly a minute to correct the mistake, which would have taken a human much longer in a codebase containing hundreds or thousands of lines. ... The second notable application is in building an app. Auto-GPT detected that Varun Mayya needed the Node.js runtime environment to build an app, which was missing on his computer. Auto-GPT searched for installation instructions, downloaded and extracted the archive, and then started a Node server to continue with the job. While Auto-GPT made the installation process effortless, Mayya cautions against using AI for coding unless you already understand programming, as it can still make errors.


The Best (and Worst) Reasons to Adopt OpenTelemetry

Gathering telemetry data can be a challenge, and with OpenTelemetry now handling essential signals like metrics, traces and logs, you might feel the urge to save your company some cash by building your own system. As a developer myself, I totally get that feeling, but I also know how easy it is to underestimate the effort involved by just focusing on the fun parts when kicking off the project. No joke, I’ve actually seen organizations assign teams of 50 engineers to work on their observability stack, even though the company’s core business is something else entirely. Keep in mind that data collection is just a small part of what observability tools do these days. The real challenge lies in data ingestion, retention, storage and, ultimately, delivering valuable insights from your data at scale. ... At the very least, auto-instrumentation will search for recognized libraries and APIs and then add some code to indicate the start and end of well-known function calls. Additionally, auto-instrumentation takes care of capturing the current context from incoming requests and forwarding it to downstream requests.
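The auto-instrumentation behavior described above — wrapping recognized calls with span start/end records and carrying the current trace context into downstream calls — can be sketched without any library. This is a minimal, library-free illustration of the idea, not the actual OpenTelemetry API; the function names and the `spans` list are invented for the example:

```python
import contextvars
import functools
import uuid

# Current trace context, propagated automatically to downstream calls
_trace_id = contextvars.ContextVar("trace_id", default=None)
spans = []  # collected telemetry records: (event, function, trace id)

def instrument(fn):
    """Mimic what auto-instrumentation injects around recognized calls:
    record span start/end and reuse the incoming trace context."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        trace_id = _trace_id.get() or uuid.uuid4().hex[:8]
        token = _trace_id.set(trace_id)  # forward context downstream
        spans.append(("start", fn.__name__, trace_id))
        try:
            return fn(*args, **kwargs)
        finally:
            spans.append(("end", fn.__name__, trace_id))
            _trace_id.reset(token)
    return wrapper

@instrument
def fetch_user(user_id):
    return query_db(user_id)  # downstream call shares the trace id

@instrument
def query_db(user_id):
    return {"id": user_id}

fetch_user(42)
print(spans)
```

Running this produces four span events, and both functions carry the same trace id — the context capture and forwarding the article attributes to auto-instrumentation.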


OpenAI’s hunger for data is coming back to bite it

The Italian authority says OpenAI is not being transparent about how it collects users’ data during the post-training phase, such as in chat logs of their interactions with ChatGPT. “What’s really concerning is how it uses data that you give it in the chat,” says Leautier. People tend to share intimate, private information with the chatbot, telling it about things like their mental state, their health, or their personal opinions. Leautier says it is problematic if there’s a risk that ChatGPT regurgitates this sensitive data to others. And under European law, users need to be able to get their chat log data deleted, he adds. OpenAI is going to find it near-impossible to identify individuals’ data and remove it from its models, says Margaret Mitchell, an AI researcher and chief ethics scientist at startup Hugging Face, who was formerly Google’s AI ethics co-lead. The company could have saved itself a giant headache by building in robust data record-keeping from the start, she says. Instead, it is common in the AI industry to build data sets for AI models by scraping the web indiscriminately and then outsourcing the work of removing duplicates or irrelevant data points, filtering unwanted things, and fixing typos.


Executive Q&A: The State of Cloud Analytics

Many businesses are trying hard right now to stay profitable during these times of economic uncertainty. The startling takeaway to us was that business and technical leaders see cloud analytics as the tool -- not a silver bullet, but a critical component -- for staying ahead of the pack in the current economic climate. Not only that, organizations need to do more with less and, as it turns out, cloud analytics is not only a wise investment during good economic times, but also in more challenging economic times. Businesses reap benefits from the same solution (cloud analytics) in either scenario. For example, cloud analytics is typically more cost-effective than on-premises analytics solutions because it eliminates the need for businesses to invest in expensive hardware and IT infrastructure. It also offers the flexibility businesses need to quickly experiment with new data sources, analytics tools, and data models to get better insights -- without having to worry about the underlying infrastructure.


AI vs. machine learning vs. data science: How to choose

It's a common topic for organizational leaders—they want to be able to articulate the core differences between AI, machine learning (ML), and data science (DS). However, sometimes they do not understand the nuances of each and thus struggle to strategize their approach to things such as salaries, departments, and where they should allocate their resources. Software-as-a-Service (SaaS) and e-commerce companies specifically are being advised to focus on an AI strategy without being told why or what that means exactly. Understanding the complexity of the tasks you aim to accomplish will determine where your company needs to invest. It is helpful to quickly outline the core differences between each of these areas and give better context to how they are best utilized. ... To decide whether your company needs to rely on AI, ML, or data science, focus on one principle to begin: Identify the most important tasks you need to solve and let that be your guide.


The strong link between cyber threat intelligence and digital risk protection

ESG defined cyber threat intelligence as, “evidence-based actionable knowledge about the hostile intentions of cyber adversaries that satisfies one or several requirements.” In the past, this definition really applied to data on IoCs, reputation lists (e.g., lists of known bad IP addresses, web domains, or files), and details on TTPs. The intelligence part of DRP is intended to provide continuous monitoring of things like user credentials, sensitive data, SSL certificates, or mobile applications, looking for general weaknesses, hacker chatter, or malicious activities in these areas. For example, a fraudulent website could indicate a phishing campaign using the organization’s branding to scam users. The same applies for a malicious mobile app. Leaked credentials could be for sale on the dark web. Bad guys could be exchanging ideas for a targeted attack. You get the picture. It appears from the research that the proliferation of digital transformation initiatives is acting as a catalyst for threat intelligence programs. When asked why their organizations started a CTI program, 38% said “as a part of a broader digital risk protection effort in areas like brand reputation, executive protection, deep/dark web monitoring, etc.”


4 perils of being an IT pioneer

An enterprise-wide IT project is deemed successful only when a team member at the lowest level of the hierarchy adopts it. Ensuring adoption of any new solution is always a challenge. More so a solution based on a new technology. There’s pushback from end users because they find the idea of losing power or skills in the face of new technology disconcerting. For any IT leader, crossing this mental inertia is always among the toughest challenges. Moreover, IT leaders have seen many initiatives based on new technologies fail because there was no buy-in from the company’s top leadership. Even if users adopt the new technology, the initial learning curve is often steep, impacting productivity. Most organizations can’t afford or aren’t ready to accept the temporary revenue loss due to the disruption caused by the new technology. Therefore, business and IT leaders must have a clear understanding of the risk/reward principle when rolling out new tech. Buy-in from top management as a top-down mandate can make adoption of new technology easier.


Is Generative AI an Enterprise IT Security Black Hole?

Shutting the door on generative AI might not be a possibility for organizations, even for the sake of security. “This is the new gold rush in AI,” says Richard Searle, vice president of confidential computing at Fortanix. He cited news of venture capital looking into this space along with tech incumbents working on their own AI models. Such endeavors may make use of readily available resources to get into the AI race fast. “One of the important things about the way that systems like GPT-3 were trained is that they also use Common Crawl web technology,” Searle says. “There’s going to be an arms race around how data is collected and used for training.” That may also mean increased demand for security resources as the technology floods the landscape. “It seems like, as in all novel technologies, what’s happening is the technology is racing ahead of the regulatory oversight,” he says, “both in organizations and at the governmental level.”



Quote for the day:

"Our chief want is someone who will inspire us to be what we know we could be." -- Ralph Waldo Emerson

Daily Tech Digest - April 20, 2023

How to succeed as a fractional CIO

A fractional CIO is typically an experienced IT leader who is external to the enterprise yet acts as an accountable leader and extension of the executive team, says Dave Hartman, president of IT management consulting firm Hartman Executive Advisors. “A fractional CIO thinks beyond technical needs and considers the needs of the organization from a strategic business perspective.” ... Beyond expertise and management skills, a fractional CIO can provide an independent point of view to enterprise leadership in critical areas, such as emerging technologies and IT security, as well as updating or building a technology roadmap. “This can help key decision-making and can sometimes segue into providing more hands-on help in executing on the chosen path,” says Amelia Tyagi, co-founder and CEO of Business Talent Group. In some cases, an enterprise may turn to a fractional CIO to serve as an interim executive, assuming the leadership role for a fixed period of time. “Fractional and interim CIOs are particularly effective solutions for companies that are undergoing rapid change, or those that have an unexpected leadership gap,” Tyagi explains.


AI Heightens Cyber Risk for Legacy Weapon Systems

Artificial intelligence's nascent centrality to offensive weapons development means the United States should take bold steps to ensure that adversaries are unable to develop their models, said Rand Corp. CEO Jason Matheny. "These AI models right now are very brittle," Matheny said. "We need to be thinking about ways that we can slow down progress elsewhere by doing things like adversarial attacks, data poisoning and model inversion. Let's use the tricks that we're seeing used against us and make sure that we understand the state of the art." Data poisoning - in which adversaries alter the data used to train AI models in order to distort the resulting algorithms - is already a risk for the United States, said Shift5 co-founder and CEO Josh Lospinoso. "These are real problems," he said. "We need to think clearly about shoring up those security vulnerabilities in our AI algorithms before we deploy these broadly and have to clean the mess up afterwards."


TUC says government is failing to protect workers from AI harms

While AI-powered workplace surveillance offers greater control to organisations over worker behaviour, Pakes said the increasing datafication of employees is also a “profit centre” for employers, which can sell the data on to third parties. “Not all of us, but many of us, can take our work just about anywhere now with technology, but it also means our work and our bosses can follow us just about everywhere, into our private lives, into our homes,” he said, adding that AI-powered surveillance is no longer restricted to the “canary in the coal mine” of logistics and warehouse workers. “It doesn’t matter if you’re blue collar or white collar, doesn’t matter if you’re in a factory, in the office or at home – this software can check us and track us and invade us, and we really need to talk about it.” Gina Neff, executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, said that as part of her research she has interviewed numerous economists who only offered a “collective shrug” when asked what they think the overall, long-term impact of AI will be on work.


ENISA’s Threat Landscape and the Effect of Ransomware

According to ENISA, cybersecurity threats continued to grow during the COVID-19 pandemic. The pandemic increased cybersecurity threats and attack surfaces. It also provided attackers opportunities to exploit the new normal, partly because of the growth in people’s online presence (e.g., social media), hybrid working models and the transition to more cloud-based solutions. The boom in the transportation industry’s courier, express and parcel (CEP) business was also a factor because, during the pandemic, CEP delivery services became a critical infrastructure. The acceleration in new artificial intelligence (AI) technology and advanced features (e.g., AI adaptability through machine learning [ML] and automated phishing email distributions) also spurred the growth of cybersecurity threats. These cyberattacks became more mainstream, leading to more targeting of enterprises through home offices. In addition, state-backed or state-sponsored groups have taken advantage of the pandemic to conduct cyberespionage and implement COVID-19–related social engineering lures.


This new technology could blow away GPT-4 and everything like it

Known as Hyena, the technology is able to achieve equivalent accuracy on benchmark tests, such as question answering, while using a fraction of the computing power. In some instances, the Hyena code is able to handle amounts of text that make GPT-style technology simply run out of memory and fail. "Our promising results at the sub-billion parameter scale suggest that attention may not be all we need," write the authors. That remark refers to the title of a landmark 2017 AI paper, 'Attention is all you need'. In that paper, Google scientist Ashish Vaswani and colleagues introduced the world to Google's Transformer AI program. The Transformer became the basis for every one of the recent large language models. But the Transformer has a big flaw. It uses something called "attention," where the computer program takes the information in one group of symbols, such as words, and moves that information to a new group of symbols, such as the answer you see from ChatGPT as output.
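The attention operation described above — each output symbol taking in a weighted mix of information from a group of input symbols — can be sketched in a few lines of NumPy. This is a minimal illustration of standard scaled dot-product attention, not Hyena's replacement for it; note that the `scores` matrix grows quadratically with sequence length, which is the cost Hyena targets:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to every key, then takes a softmax-weighted
    average of the values: information moves from one group of symbols
    (K/V) to another (Q). Cost is quadratic in sequence length."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_q, n_k) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V  # weighted mix of value vectors

# Toy example: 3 query tokens attending over 4 key/value tokens
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8): one mixed vector per query token
```

Because every query scores every key, doubling the text length quadruples the work — the scaling wall that subquadratic designs like Hyena aim to avoid.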


Skills-first hiring can increase talent pools by up to 20x

Generally, employers are looking for candidates with transferable in-demand skills such as leadership and specific technological abilities, Duke said. “In nearly all cases, employers will find that candidates always have skills that can be applied to a position, even if the candidate is coming from a vastly different industry. This increases the talent pool and makes it easier for employers to find good candidates,” she said. As an example of how the skills-first approach works, Duke said that when employers looking to hire digital marketing managers use this method, the available talent pool increases by almost 22x. “That’s because many of the skills associated with this job are common across other jobs and industries,” she explained. “In this case, about 30 separate job titles across the U.S. have relevant skills for this job, but most companies would overlook those candidates.” Every role at an organization can be broken down into a set of skills needed to do the job well. Every person has a set of skills, whether they’re an existing employee or part of an external talent pool, Duke said. 
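The premise above — that every role and every candidate reduces to a set of skills that can be compared across job titles — can be sketched as a simple overlap score. The role, candidate names, and skills below are invented for illustration:

```python
# Skills-first matching: score candidates by how many of the role's
# required skills they hold, regardless of their current job title.
role_skills = {"seo", "analytics", "copywriting", "leadership"}

candidates = {
    "print-media manager": {"copywriting", "leadership", "budgeting"},
    "data analyst": {"analytics", "sql"},
    "barista": {"customer-service"},
}

scores = {name: len(skills & role_skills) / len(role_skills)
          for name, skills in candidates.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # the candidate with the most transferable skills
```

Under a title-first filter, none of these candidates would surface for a "digital marketing manager" search; scoring on skill overlap is what widens the pool in the way Duke describes.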


Data privacy implications of ChatGPT

Although automated decision-making can be useful for organizations, there are serious concerns and risks to individuals subject to such processes, such as adverse legal effects based on processes they may not understand or that may be exacerbating and replicating biases and discriminatory practices. For example, the American Civil Liberties Union has opined that “AI is built by humans and deployed in systems and institutions that have been marked by entrenched discrimination . . . bias is in the data used to train the AI . . . and can rear its head throughout the AI’s design, development, implementation, and use.” Similar concerns were raised in a 2022 Constangy webinar on AI featuring Commissioner Keith Sonderling of the Equal Employment Opportunity Commission. Further, the Italian data protection authority is investigating additional data privacy implications of ChatGPT, such as whether it can comply with the GDPR, its legal basis for processing, collecting, and storing mass amounts of personal data, and its lack of age verification tools. In the meantime, Italy has temporarily banned ChatGPT.


Why is ETL Dying?

Traditional ETL pipelines have faced difficulties in supporting the agility required by modern analytics use cases, leaving business users waiting in line for their desired results. As a result, ETL pipelines are often viewed as a hindrance to better performance, and businesses must carefully assess their current role and explore how they can be optimally leveraged in the contemporary analytics landscape. Traditional ETL processes require moving large amounts of data across various stages and systems, making them slow, demanding on resources, and prone to errors. This can be challenging for modern data-driven businesses to manage, as traditional ETL tools often come with a high price tag and demand substantial investments in hardware, software, and personnel resources. In contrast, newer data platforms present pre-built services and extensions that can lessen these expenses and enable enterprises to concentrate on providing meaningful outcomes to their users. For example, Google Datastream is an instance of this approach, which is capable of managing real-time CDC with minimal coding or setup.
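The contrast with batch ETL can be illustrated with a toy change-data-capture apply loop: instead of re-extracting and re-loading whole tables on a schedule, only row-level change events flow to the target. The event shape here is invented for illustration; real CDC services such as Datastream emit richer records:

```python
# Toy CDC apply loop: the target table is kept in sync by replaying
# individual insert/update/delete events rather than bulk reloads.
target = {}  # destination table keyed by primary key

def apply_change(event):
    """Apply one row-level change event to the target table."""
    op, key, row = event["op"], event["key"], event.get("row")
    if op in ("insert", "update"):
        target[key] = row
    elif op == "delete":
        target.pop(key, None)

changes = [
    {"op": "insert", "key": 1, "row": {"name": "Ada"}},
    {"op": "update", "key": 1, "row": {"name": "Ada L."}},
    {"op": "insert", "key": 2, "row": {"name": "Alan"}},
    {"op": "delete", "key": 2},
]
for event in changes:
    apply_change(event)

print(target)  # {1: {'name': 'Ada L.'}}
```

Each event touches one row, so latency and resource cost scale with the volume of change rather than the size of the source tables — the property that makes streaming CDC attractive where batch ETL struggles.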


These medical IoT devices carry the biggest security risks

"Advances in technology are essential to improve the speed and quality of care delivery as the industry is challenged with a shortage of care providers, but with increasingly connected care comes a bigger attack surface," said Mohammad Waqas, Armis' principal solutions architect for healthcare. "Protecting every type of connected device, medical, IoT, even the building management systems, with full visibility and continuous contextualised monitoring is a key element to ensuring patient safety." The prevalence of unprotected devices comes as the healthcare sector continues to face fresh cybersecurity risks. The sector saw a 31% climb in threat activities between January and March this year compared to the previous quarter, according to Armis, citing figures from its intelligence platform. Other evidence suggests the healthcare sector is increasingly reliant on connected devices. ... Singapore's Cyber Security Agency (CSA) has also warned that critical IoT devices are potential targets in ransomware attacks, with cyber criminals recognising that the infection of these devices could lead to significant downtime costs and damage.


IBM takes a pragmatic approach to enterprise AI

IBM has integrated AI with its mainframes. The newest z16 Big Iron boasts an AI accelerator built onto its core Telum processor that can do 300 billion deep-learning inferences per day with one millisecond latency, according to IBM. The latest version of its z/OS operating system will include a new AI Framework for system operations to optimize IT processes, simplify management, improve performance, and reduce skill requirements. The new version will also support technologies to deploy AI workloads co-located with z/OS applications and will feature improved cloud capabilities. IBM said AI-powered workload management will intelligently predict upcoming workloads and react by allocating an appropriate number of batch runs, thus eliminating manual fine-tuning and trial-and-error approaches. “Systems are getting more and more complex, so we want to simplify operations with AI and automation by bringing a very prescriptive solution to our clients that will give them value out of the box and then much more,” Chopra said.



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith