Daily Tech Digest - May 05, 2022

Being a responsible CTO isn’t just about moving to the cloud

The reasons for needing to be a responsible CTO are just as strong as the need to be a tech-savvy one if a company wants to thrive in a digital economy. There are many facets to being a responsible CTO, such as making sure that code is being written in a diverse way, and that citizen data is being used appropriately. In a BCS webinar, IBM fellow and vice-president for technology in EMEA, Rashik Parmar, summarised the three biggest forces driving unprecedented change today as post-pandemic work, digitalisation and the climate emergency. With many organisations turning to technology to help solve some of the biggest challenges they’re facing today, it’s clear that there will need to be answers about how this tech-heavy economy will impact the environment. It makes sense that this is often the first place that a CTO will start when deciding how to drive a more responsible future. ... If we focus on the environmental considerations, it’s becoming more commonly known that whilst a move to the cloud may be better for reducing an organisation’s carbon emissions than running multiple on-premises systems, the initiative alone isn’t going to spell good news for climate change.


Frozen Neon Invention Jolts Quantum Computer Race

The group's experiments reveal that even without optimization, the new qubit can already stay in superposition for 220 nanoseconds and change state in only a few nanoseconds, outperforming the charge-based qubits that scientists have worked on for 20 years. "This is a completely new qubit platform," Jin says. "It adds itself to the existing qubit family and has big potential to be improved and to compete with currently well-known qubits." The researchers suggest that by developing qubits based on an electron's spin instead of its charge, they could achieve coherence times exceeding one second. They add that the relative simplicity of the device may lend itself to easy manufacture at low cost. The new qubit resembles previous work creating qubits from electrons on liquid helium. However, the researchers note frozen neon is far more rigid than liquid helium, which suppresses the surface vibrations that can disrupt the qubits. It remains uncertain how scalable this new system is—whether it can incorporate hundreds, thousands or millions of qubits.


AI for Cybersecurity Shimmers With Promise, but Challenges Abound

There are definitely differences in opinion between business executives, who largely consider AI to be a perfect solution, and security analysts on the ground, who have to deal with the day-to-day reality, says Devo's Ollmann. "In the trenches, the AI part is not fulfilling the expectations and the hopes of better triaging, and in the meantime, the AI that is being used to detect threats is working almost too well," he says. "We see the net volume of alerts and incidents that are making it into the SOC analysts' hands is continuing to increase, while the capacity to investigate and close those cases has remained static." The continuing challenges that come with AI features mean that companies still do not trust the technology. A majority of companies (57%) are relying on AI features more or much more than they should, compared with only 14% who do not use AI enough, according to respondents to the survey. In addition, few security teams have turned on automated response, partly because of this lack of trust, but also because automated response requires a tighter integration between products that just is not there yet, says Ollmann.


Concerned about cloud costs? Have you tried using newer virtual machines?

“Customers are willing to pay more for newer GPU instances if they deliver value in being able to solve complex problems quicker,” he wrote. Some of this can be chalked up to the fact that, until recently, customers looking to deploy workloads on these instances have had to do so on dedicated GPUs, as opposed to renting smaller virtual processing units. And while Rogers notes that customers, in large part, prefer to run their workloads this way, that may be changing. Over the past few years, Nvidia — which dominates the cloud GPU market — has introduced features that allow customers to split GPUs into multiple independent virtual processing units using a technology called Multi-Instance GPU, or MIG for short. Debuted alongside Nvidia’s Ampere architecture in early 2020, the technology enables customers to split each physical GPU into up to seven individually addressable instances. And with the chipmaker’s Hopper architecture and H100 GPUs, announced at GTC this spring, MIG gained per-instance isolation, I/O virtualization, and multi-tenancy, which open the door to their use in confidential computing environments.


Attackers Use Event Logs to Hide Fileless Malware

The ability to inject malware into a system’s memory classifies it as fileless. As the name suggests, fileless malware infects targeted computers while leaving behind no artifacts on the local hard drive, making it easy to sidestep traditional signature-based security and forensics tools. The technique, where attackers hide their activities in a computer’s random-access memory and use native Windows tools such as PowerShell and Windows Management Instrumentation (WMI), isn’t new. What is new, however, is how the encrypted shellcode containing the malicious payload is embedded into Windows event logs. To avoid detection, the code “is divided into 8 KB blocks and saved in the binary part of event logs,” Legezo said. “The dropper not only puts the launcher on disk for side-loading, but also writes information messages with shellcode into existing Windows KMS event log.” “The dropped wer.dll is a loader and wouldn’t do any harm without the shellcode hidden in Windows event logs,” he continues. “The dropper searches the event logs for records with category 0x4142 (“AB” in ASCII) and having the Key Management Service as a source.”
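
To make the splitting concrete, here is a minimal Go sketch of the 8 KB block arithmetic described above: a payload carved into fixed-size chunks that can later be reassembled in order. All names are invented and nothing here touches real event logs; it illustrates only the encoding scheme, not the actual dropper.

```go
// Illustrative only: splitting a payload into 8 KB blocks and
// reassembling it, mirroring the hiding scheme described in the article.
package main

import (
	"bytes"
	"fmt"
)

const blockSize = 8 * 1024 // 8 KB blocks, per the report

// splitIntoBlocks carves a payload into fixed-size chunks, each small
// enough to hide inside a single event-log record.
func splitIntoBlocks(payload []byte) [][]byte {
	var blocks [][]byte
	for off := 0; off < len(payload); off += blockSize {
		end := off + blockSize
		if end > len(payload) {
			end = len(payload)
		}
		blocks = append(blocks, payload[off:end])
	}
	return blocks
}

// reassemble is the inverse: records read back in order, concatenated.
func reassemble(blocks [][]byte) []byte {
	return bytes.Join(blocks, nil)
}

func main() {
	payload := bytes.Repeat([]byte{0x90}, 20000) // stand-in bytes, not shellcode
	blocks := splitIntoBlocks(payload)
	fmt.Printf("split %d bytes into %d blocks\n", len(payload), len(blocks))
	fmt.Println("round-trip ok:", bytes.Equal(reassemble(blocks), payload))
}
```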


Fortinet CEO Ken Xie: OT Business Will Be Bigger Than SD-WAN

"We definitely see OT as a bigger market going forward, probably bigger than SD-WAN," Xie tells investors Wednesday. "The growth is very, very strong. We do see a lot of potential, and we also have invested a lot in this area to meet the demand." Despite its potential, Fortinet's OT practice today is considerably smaller than its SD-WAN business, which has been a company priority for years. SD-WAN accounted for 16% of Fortinet's total billings in the quarter ended Dec. 31 while OT accounted for just 8% of total billings over that same time period. Fortinet last summer had the second-largest SD-WAN market share in the world, trailing only Cisco. Fortinet's OT success coincides with growing demand from manufacturers, which CFO Keith Jensen says is the one vertical that continues to stand out for the company. ... "The strength in manufacturing really speaks to the threat environment, ransomware, OT, and things of that nature," Jensen says. "Manufacturing is trying desperately to break into the top five of our verticals and it's getting closer and closer every quarter."


Meta has built a massive new language AI—and it’s giving it away for free

Meta AI says it wants to change that. “Many of us have been university researchers,” says Pineau. “We know the gap that exists between universities and industry in terms of the ability to build these models. Making this one available to researchers was a no-brainer.” She hopes that others will pore over their work and pull it apart or build on it. Breakthroughs come faster when more people are involved, she says. Meta is making its model, called Open Pretrained Transformer (OPT), available for non-commercial use. It is also releasing its code and a logbook that documents the training process. The logbook contains daily updates from members of the team about the training data: how it was added to the model and when, what worked and what didn’t. In more than 100 pages of notes, the researchers log every bug, crash, and reboot in a three-month training process that ran nonstop from October 2021 to January 2022. With 175 billion parameters (the values in a neural network that get tweaked during training), OPT is the same size as GPT-3. This was by design, says Pineau. 


Tackling the threats posed by shadow IT

Shadow IT can be tough to mitigate, given the embedded culture of hybrid working in many organizations, in addition to a general lack of engagement from employees with their IT teams. For staff to continue accessing apps securely from anywhere, at any time, and from any device, businesses must evolve their approach to organizational security. Because the modern working environment moves at such a fast pace, employees have turned en masse to shadow IT when the sanctioned experience isn’t quick or accurate enough. This leads to the bypassing of secure networks and best practices and can leave IT departments out of the process. A way of controlling this is by deploying corporate-managed devices that provide remote access, giving IT teams most of the control and removing the temptation for employees to use unsanctioned hardware. Providing employees with compelling apps, data, and services with a good user experience should see a reduced dependence on shadow IT, putting IT teams back in the driving seat and restoring security.


5 AI adoption mistakes to avoid

Every AI-related business goal begins with data – it is the fuel that enables AI engines to run. One of the biggest mistakes companies make is not taking care of their data. This begins with the misconception that data is solely the responsibility of the IT department. Before data is captured and input into AI systems, business subject matter experts and data scientists should be looped in, and executives should provide oversight to ensure the right data is being captured and maintained appropriately. It’s important for non-IT personnel to realize they not only benefit from good data in yielding quality AI recommendations, but their expertise is a critical input to the AI system. Make sure that all teams have a shared sense of responsibility for curating, vetting, and maintaining data. Data management procedures are also a key component of data care. ... AI requires intervention to sustain it as an effective solution over time. For example, if AI is malfunctioning or if business objectives change, AI processes need to change. Doing nothing or not implementing adequate intervention could result in AI recommendations that hinder or act contrary to business objectives.


SEC Doubles Cyber Unit Staff to Protect Crypto Users

The SEC says that the newly named Crypto Assets and Cyber Unit, formerly known as the Cyber Unit, in the Division of Enforcement, will grow to 50 dedicated positions. "The U.S. has the greatest capital markets because investors have faith in them, and as more investors access the crypto markets, it is increasingly important to dedicate more resources to protecting them," says SEC Chair Gary Gensler. This dedicated unit has successfully brought dozens of cases against those seeking to take advantage of investors in crypto markets, he says. ... "This is great news! A lot of the cryptocurrency market is against any regulations, including those that would safeguard their own value, but that's not the vast majority of the rest of the world. The cryptocurrency world is full of outright scams, criminals and ne'er-do-well-ers," says Roger Grimes, data-driven defense evangelist at cybersecurity firm KnowBe4. Grimes adds that even legal and very sophisticated financiers and investors are taking advantage of the immaturity of the cryptocurrency market.



Quote for the day:

"The very essence of leadership is that you have to have vision. You can't blow an uncertain trumpet." -- Theodore M. Hesburgh

Daily Tech Digest - May 04, 2022

The cloud data migration challenge continues - why data governance is job one

How can governance help? The role of governance is to define the rules and policies for how individuals and groups access data properties and the kind of access they are allowed. Yet people in an organization rarely operate according to well-defined roles. They perform in multiple roles, often provisionally. On-ramping has to happen immediately; off-ramping has to be a centralized function. One very large organization we dealt with discovered that departing employees still had access to critical data for seven to nine days! So how can data governance support more intelligent data security? After all, without governance, security would be arbitrary. Many organizations that employ security schemes struggle because such schemes tend to be either too loose or too tight and almost always too rigid (insufficiently dynamic). In this way, security can hinder the progress of the organization. Yet, given the complexity of data architecture today, it’s become impossible to manage security for individuals without a coherent and dynamic governance policy to drive security allowances or to grant exceptions to those rules.


Cybersecurity and the Pareto Principle: The future of zero-day preparedness

There’s a good reason why software asset inventory and management is the second-most important security control, according to the Center for Internet Security’s (CIS) Critical Security Controls. It’s “essential cyber hygiene” to know what software is running and to be able to access that up-to-date information instantaneously. It’s as though you were a new master-at-arms for a local baron in the Middle Ages. Your first duty would be to map out the castle grounds that you are charged to protect. ... As we put Log4Shell behind us, let’s incorporate these lessons learned for a more prepared future. The allocation of resources by enterprise security teams needs to be more purposeful, as attackers become increasingly sophisticated and continue to have what feels like unlimited resources. The value added through clear visibility and real-time insights into your entire ecosystem becomes all the more important. Remember, the core scope of the security team is to create a secure IT ecosystem, mitigate the exploitation of known vulnerabilities and monitor for any suspicious activity.


Expect to see more online data scraping, thanks to a misinterpreted court ruling

What can and should IT do about that? Given that these are generally publicly visible pages, it’s a problem. There are few technical methods to block scrapers that wouldn’t cause problems for the site visitors the enterprise wants. Years ago, I was managing a media outlet that was making a huge move to premium content, meaning that readers would now have to pay for selected premium stories. We ran into a problem. We couldn’t allow people to freely share premium content, as we needed people to buy those subscriptions. That meant that we blocked cut-and-paste and specifically blocked someone from saving the page as a PDF. But that meant that those pages also couldn’t be printed. (Saving as PDF is really printing to PDF, so blocking PDF downloads meant blocking all printers.) It took just a couple of hours before new premium subscribers screamed that they had paid for access and needed to be able to print pages and read them at home or on a train. After quite a few subscribers threatened to cancel their paid subscriptions, we surrendered and reinstated the ability to print.


Unpatched DNS Bug Puts Millions of Routers, IoT Devices at Risk

Like Log4Shell, the DNS bug has broad reach. That earlier flaw affected the ubiquitous open-source Apache Log4j framework—found in countless Java apps used across the internet—and a recent report found that it continues to put millions of Java apps at risk, though a patch exists for it. Though it affects a different set of targets, the DNS flaw also has a broad scope, not only because of the devices it potentially affects, but also because of the inherent importance of DNS to any device connecting over IP, researchers said. DNS is a hierarchical database that serves the integral purpose of translating a domain name into its related IP address. To distinguish the responses to different DNS requests, beyond the usual 5-tuple (source IP, source port, destination IP, destination port, protocol) and the query itself, each DNS request includes a parameter called the “transaction ID.” The transaction ID is a unique number per request, generated by the client and added to each request sent. It must be included in a DNS response for the client to accept it as the valid response to that request, researchers noted. “Because of its relevance, DNS can be a valuable target for attackers,” they observed.
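
To see where that transaction ID lives on the wire, here is a small Go sketch of the fixed 12-byte DNS header from RFC 1035, plus the matching check a client performs on replies. The struct layout is standard; the helper names are ours, and the cryptographically random ID shown here is precisely the unpredictability a vulnerable implementation lacks.

```go
// A minimal model of the DNS header and the transaction-ID check
// discussed above. Helper names are illustrative, not from any library.
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
)

// dnsHeader is the fixed 12-byte header that precedes every DNS message
// (RFC 1035). The first field is the 16-bit transaction ID.
type dnsHeader struct {
	ID      uint16 // must match between request and response
	Flags   uint16
	QDCount uint16 // questions
	ANCount uint16 // answers
	NSCount uint16 // authority records
	ARCount uint16 // additional records
}

// newQueryHeader builds a standard-query header with an unpredictable ID;
// predictable IDs are what let an off-path attacker spoof responses.
func newQueryHeader() (dnsHeader, error) {
	var b [2]byte
	if _, err := rand.Read(b[:]); err != nil {
		return dnsHeader{}, err
	}
	return dnsHeader{
		ID:      binary.BigEndian.Uint16(b[:]),
		Flags:   0x0100, // recursion desired
		QDCount: 1,
	}, nil
}

// responseMatches is the acceptance check the researchers describe: a reply
// counts as valid only if it echoes the client's transaction ID.
func responseMatches(q dnsHeader, resp []byte) bool {
	return len(resp) >= 2 && binary.BigEndian.Uint16(resp[:2]) == q.ID
}

func main() {
	q, err := newQueryHeader()
	if err != nil {
		panic(err)
	}
	fmt.Printf("query transaction ID: 0x%04x\n", q.ID)
	// A forged reply with a guessed ID is almost always rejected.
	fmt.Println("spoofed reply accepted?", responseMatches(q, []byte{0x00, 0x00}))
}
```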



Managed services vs. hosted services vs. cloud services: What's the difference?

Managed service providers (MSPs) existed first - before we were talking about the big public cloud providers. “I’ve seen some definitions where MSPs are a superset and all CSPs are MSPs, but not all MSPs are CSPs. That seems a reasonable definition to me,” says Miniman. One historical example of a managed service provider you may know is Rackspace: Their company name literally reflected that you were buying space in their rack to run workloads. The way their business started out was as a hosted service: Your server ran in Rackspace’s data center. But Rackspace also offered other types of services to customers - managed services. ... “When I think of a hosted environment, that is something dedicated to me,” says Miniman. “So traditionally, there was a physical machine…that maybe had a label on it. But definitely from a security standpoint, it was “company X is renting this machine that is dedicated to that environment.” Public cloud service providers sell hundreds of services: You can think of those as standard tools, just like you’d find standard metric tools walking into any hardware store.


Making Agile Work in Asynchronous and Hybrid Environments

The ideal state for asynchronous teams is to remain aligned passively - or with little effort - eliminating the need for frequent meetings or lengthy documentation of the minutiae of every project. To pull this off, visual collaboration should be a key element of Agile management for teams that are working remotely and asynchronously. Visual collaboration brings the ease of alignment of the whiteboard into the digital workplace, giving developers a living artifact of project plans that can include diagrams, UX mockups, embedded videos, and other communication tools that can make async work nearly error-proof. Our team at Miro uses a variety of visual tools to manage our development, and many of these tools are available as free templates that other teams can use. The agile product roadmap helps prioritize work and shift tasks as priorities change. And the product launch board helps our team visually align design, development, and go-to-market (GTM) teams as we come down to the wire on a new launch. The shared nature of these tools gives us confidence as we work.


Three steps to an effective data management and compliance strategy

Businesses clearly need to know more about their data to meet compliance needs, but the challenge is sorting through the noise in all the volume. Data analytics is essential for enterprises looking to increase efficiency, improve business decision-making and attain that important competitive edge while still ensuring that they comply with today’s data standards. However, while big data can add significant value to the decision-making process, supporting large volumes of unstructured data can be complex, as inadequate data management and data protection introduce unacceptable levels of risk. The emergence of DataOps, which is an automated and process-oriented methodology aimed at improving the quality of data analytics, further supports the requirement for enhanced data management. Driving faster and more comprehensive analytics is key to leveraging value from data, but this can only be done if data is managed correctly, the right governance protocols are in place, and data quality is kept to the highest standard.


5 key industries in need of IoT security

The growth of IoT has spurred a rush to deploy billions of devices worldwide. Companies across key industries have amassed vast fleets of connected devices, creating gaps in security. Today, IoT security is overlooked in many areas. For example, a sizable percentage of devices share the userID and password of “admin/admin” because their default settings are never changed. The reason security has become an afterthought is that most devices are invisible to organizations. Hospitals, casinos, airports, cities, etc. simply have no way of seeing every device on their networks. ... Cities rely on 1.1 billion IoT devices for physical security, operating critical infrastructure from traffic control systems, street lights, subways, emergency response systems and more. Any breach or failure in these devices could pose a threat to citizens. You see it in the movies: brilliant hackers control the traffic lights across a city, with perfect timing, to guide an armored vehicle into a trap. Then there’s real life; for instance, when a hacker in Romania took control of Washington DC’s outdoor surveillance cameras days before the Trump inauguration.


Getting strategy wrong—and how to do it right instead

Making matters more complex, especially in areas of public policy and defense, real-life leaders do not have a neat economist’s single measure of value. Instead, they are faced with a bundle of conflicting ambitions—a group of desires, goals, intents, values, and fears—that cannot all be satisfied simultaneously. Forging a sense of purpose from this bundle is part of the gnarly problem. Making matters most complex is the fact that the connection between potential actions and actual outcomes is unclear. A gnarly challenge is not solved with analysis or the application of preset frameworks. A coherent response arises only through a process of diagnosing the nature of the challenges, framing, reframing, chunking down the scope of attention, referring to analogies, and developing insight. The result is a design, or creation, embodying purpose. I call it a creation because it is often not obvious at the start, the product of insight and judgment rather than an algorithm. Implicit in the concept of insightful design is that knowledge, though required, is not, by itself, sufficient.


Understand the 3 P’s of Cloud Native Security

The movement to shift security left has empowered developers to find and fix defects early so that when the application is pushed into production, it is as free as possible from known vulnerabilities at that time… But shifting security left is just the beginning. Vulnerabilities arise in software components that are already deployed and running. Organizations need a comprehensive approach that spans left and right, from development through production. While there’s no formulaic one-size-fits-all way to achieve end-to-end security, there are some worthwhile strategies that can help you get there. ... Shifting left can help organizations develop applications with security in mind. But no matter how confident you are in the security of an application when it leaves development, there is no guarantee that it will remain secure in production. We have seen on a large scale that vulnerabilities are often disclosed well after code is deployed to production. Reminders include Apache Struts, Heartbleed and, most recently, Log4j, whose vulnerable code was first published in 2013 but whose flaw was discovered just last year.




Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor

Daily Tech Digest - May 03, 2022

5 Elements of High-Quality Software

The software architecture initially lays a foundation for a software project and lets programmers work on that project for many years. The entire software system’s lifetime, maintainability, and market success depend on this foundation. Late architectural changes are usually time-consuming and costly for any software development team. In other words, it’s literally impossible to change the foundation once a house is built on top of it. Therefore, we always need to strive to select the optimal architectural pattern on the first implementation attempt. It’s indeed better to spend more time on early architectural decisions than to spend your time fixing the side-effects of hasty, non-optimal architectural decisions. First, work on the software system’s core functionality and stabilize it — at this stage, architectural changes are not very time-consuming. Next, work on features by using the software core’s functionality. Even though you use a monolithic pattern, you can detach the core logic from features at the source-code level — not at the architectural level.


SSE kicks the ‘A’ out of SASE

Now comes security service edge (SSE), which pulls the security functions in SASE back into a unified services offering that includes CASB, zero-trust network access (ZTNA) and secure web gateway (SWG). SSE came in the wake of the COVID-19 pandemic, which sent most employees home to work and put in motion the ongoing trend toward hybrid work. With many people working from home at least part of the time, the role of branch offices is lessened and the need for security features that follow workers wherever they are – with work days starting from home and then moving to offices or other locations – is growing. What the role of SSE is in the larger network security space and what it means for the future of SASE are the subjects of some debate in the industry. However, it puts a spotlight on the ongoing evolution of networking as the definition of work continues to change and the focus of IT shifts from the traditional central data center to data and workloads in the cloud and at the edge. Once the pandemic hit, "it was no longer about branch offices," said John Spiegel, director of strategy at Axis Security, which in April launched Atmos, its SSE platform.


What Is Zero Trust Network Access (ZTNA)?

To begin, the idea behind zero trust network access starts with the assumption that cybersecurity attacks can come from anyone, whether internal or external to the network. A traditional IT network trusts pretty much everything, while a zero trust architecture literally means “trust no one,” including systems, users, software, and machines. Zero trust network access verifies a user’s identity and privileges and forces both users and devices to be continuously monitored and re-verified to maintain access. For example, let’s say that you log in to your bank account via a mobile device or even your laptop computer. Once you check your balance, you open a new tab to continue something else outside of the bank account screen. After a while, that tab will produce a pop-up with a timeout warning asking if you want to continue or log out. If you don’t reply in time, it will automatically log you out, and you will be forced to log back in if you want to access your bank account details again.
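
That bank-tab behavior boils down to an idle-timeout check on every request, with a forced re-login once the window lapses. Here is a toy Go sketch of the idea; the five-minute window and all names are invented for illustration.

```go
// A toy model of continuous re-verification: access requires a recent
// check-in, and an expired session forces a fresh login.
package main

import (
	"errors"
	"fmt"
	"time"
)

const idleTimeout = 5 * time.Minute // invented value for illustration

type session struct {
	user       string
	lastActive time.Time
}

var errExpired = errors.New("session expired: log in again")

// touch is called on every request: if the user has been idle past the
// timeout, access is denied and re-authentication is forced; otherwise
// the activity clock resets, like the bank tab in the example.
func (s *session) touch(now time.Time) error {
	if now.Sub(s.lastActive) > idleTimeout {
		return errExpired
	}
	s.lastActive = now
	return nil
}

func main() {
	s := &session{user: "alice", lastActive: time.Now()}
	// Simulate coming back to the tab ten minutes later.
	if err := s.touch(time.Now().Add(10 * time.Minute)); err != nil {
		fmt.Println(err) // session expired: log in again
	}
}
```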


Determining “nonnegotiables” in the new hybrid era of work

Skill development is another function that takes place at the group level. So, in the hybrid era, it’s also important to avoid losing those opportunities. As Degreed’s Chief Learning and Talent Officer Kelly Palmer wrote for the World Economic Forum, it’s helpful to use hybrid employees’ time at the office for “collaborative projects in which their new skills can be put to work,” while “fully remote companies can organize virtual collaborations.” Prioritizing development on both the individual and team levels is also a nonnegotiable because of the challenges presented to organizations by skill gaps. “Half of all employees around the world will need reskilling by 2025—and that number does not include all the people who are currently not in employment,” PwC Global Chairman Robert E. Moritz and World Economic Forum Managing Director Saadia Zahidi wrote in the 2021 report Upskilling for Shared Prosperity.


Things that will remain in the inkwell in the new European regulation of artificial intelligence

The European Union took a step forward and a year ago presented a proposal for a pioneering regulation, the first of its kind in the world, which divides AI technologies into four categories based on the risk they may pose to citizens. But some experts point out that there are complex applications that, under the current wording, could be left out of the regulation: health, autonomous cars and weapons, among others. The EU is debating the final details of the AI regulations, which could be ready in 2023. The regulation is “unique in the world” due to its characteristics, although it leaves important aspects in the shadows, says Lucía Ortiz de Zárate, researcher in Ethics and Governance of Artificial Intelligence at the Autonomous University of Madrid. Ortiz de Zárate has submitted, together with the Fundación Alternativas, comments on the Commission’s proposal. ... She regrets that sensitive sectors such as health are not included among the most closely watched artificial intelligence classifications.


How To Re-Architect Four Business Components With Digital Transformation

Going paperless and modernizing IT won't drive digital transformation on their own. On the contrary, true digital transformations encompass reevaluating current business processes and re-architecting them from the ground up to effectuate radical change. The key to successful digital transformation is to establish and seamlessly intertwine four core pillars: technology and infrastructure, business processes and models, customer experience and organizational culture. In my experience as an entrepreneur operating a digital transformation agency, high-performing organizations and digital leaders are able to continuously re-evaluate their core, identify weaknesses and opportunities and guide their teams through the ongoing transformation of all four pillars simultaneously to achieve defined goals. Whether it's an implementation of AI-driven analytics or a new customer portal, all components of the four pillars need to be considered and transformed in unison to achieve transformation goals and deliver tangible results. Initiatives that touch only the technology or infrastructure may drive improvement, but they're rarely transformative.


Deep Dive: Protecting Against Container Threats in the Cloud

Container technology, like other types of infrastructure, can be compromised in a number of different ways – however, misconfiguration reigns atop the initial-access leaderboard. According to a recent Gartner analysis, through 2025, more than 99 percent of cloud breaches will have a root cause of customer misconfigurations or mistakes. “Containers are often deployed in sets and in very dynamic environments,” Nunnikhoven explained. “The misconfiguration of access, networking and other settings can lead to an opportunity for cybercriminals.” Trevor Morgan, product manager at comforte AG, noted that companies, especially smaller companies, are generally using default configuration settings vs. more sophisticated and granular configuration capabilities: “Basic misconfigurations or accepting default settings that are far less secure than customized settings.” That can lead to big (and expensive) problems. For instance, last June the “Siloscape” malware was discovered, which is the first known malware to target Windows containers. It breaks out of Kubernetes clusters to plant backdoors, raid nodes for credentials or even hijack an entire database hosted in a cluster.


DAOs: A blockchain-based replacement for traditional crowdfunding

Digital crowdfunding platforms like GoFundMe, Patreon and Kickstarter have enjoyed massive patronage over the past 10 years. This growth can be attributed primarily to the nature of crowdfunding, which is set up with minimal risk: the risk is spread across all contributors to a particular idea or startup. Start-ups with financial needs will find that getting funding from traditional institutions is no easy feat. These institutions take on quite a lot of the risk involved in financing business ideas that could end badly. With a global economy still reeling from the pandemic, the accessibility and much less bureaucratic nature of DAOs as a tool for crowdfunding have been a primary factor in their growth. Digitalized crowdfunding in the form of DAOs has eliminated some traditional limits of the financing form. This simplicity makes it a disruptive force against traditional crowdfunding methods. Emmet Halm dropped out of Harvard to found DAOHQ, which bills itself as the first marketplace for DAOs, where users can find information about any DAO.


A regular person’s guide to the mind-blowing world of hybrid quantum computing

Quantum computers allow us to harness the power of entanglement. Instead of waiting for one command to execute, as binary computers do, quantum computers can come to all of their conclusions at once. In essence, they’re able to come up with (nearly) all the possible answers at the same time. The main benefit to this is time. A simulation or optimization task that might take a supercomputer a month to process could be completed in mere seconds on a quantum computer. The most commonly cited example of this is drug discovery. In order to create new drugs, scientists have to study their chemical interactions. It’s a lot like looking for a needle in a never-ending field of haystacks. There are near-infinite possible chemical combinations in the universe, sorting out their individual combined chemical reactions is a task no supercomputer can do within a useful amount of time. Quantum computing promises to accelerate these kinds of tasks and make previously impossible computations commonplace. But it takes more than just expensive, cutting-edge hardware to produce these ultra-fast outputs.


Go Language Riding High with Devs, But Has a Few Challenges

Among the most significant technical barriers to increased Go language adoption are missing features and a lack of ecosystem/library support. “We asked for more details on what features or libraries respondents were missing and found that generics was the most common critical missing feature — we expect this to be a less significant barrier after the introduction of generics in Go 1.18,” wrote Alice Merrick, a user experience researcher at Google, in a post on the Go Blog discussing the 2021 survey. “The next most common missing features had to do with Go’s type system.” The Go community added generics to the language in release 1.18. Release 1.18, delivered last month, provides new features to enhance security and developer productivity, and improve the performance of Go. Steve Francia, Google Cloud’s Product & Strategic Lead for Go, called the new update “monumental” and said generics was the most sought-after feature by developers. “With generics, this specific feature has been the most sought-after feature in Go for the last 10 years,” Francia said.
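
For readers who haven't seen the feature, here is a minimal example of what Go 1.18 generics enable: one function covering many element types, checked at compile time. The Number constraint and Sum function are our own illustration, not from the survey.

```go
// Go 1.18 generics in brief: before this release, Sum would have required
// either duplicated code per type or interface{} with runtime assertions.
package main

import "fmt"

// Number constrains the type parameter to these numeric types
// (the ~ also admits types defined on top of them).
type Number interface {
	~int | ~int64 | ~float64
}

// Sum works for any slice whose element type satisfies Number.
func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // 6, with T inferred as int
	fmt.Println(Sum([]float64{1.5, 2.5})) // 4, with T inferred as float64
}
```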



Quote for the day:

"It takes an influential leader to excellently raise up leaders of influence." -- Anyaele Sam Chiyson

Daily Tech Digest - May 02, 2022

The Time Travel Method of Debugging Software

By removing the preconceived notions about how challenging programming is, Jason Laster became more confident in building a developer-friendly debug tool. “We want to make software more approachable,” he said. “We want more people to feel like they can program and do things that don’t require a math degree.” He went on to say, “Imagine being a Project Manager and asking your engineer why something broke and receiving a long explanation that still leaves your question unanswered. Using Replay, they can share the URL with the engineers who can just go in and leave a comment. Now, the PM can recognize the function and identify what went wrong on their own. If anybody along the way can record the issue with Replay, then everyone downstream can look at the replay, debug it and see exactly what went wrong.” Acknowledging that it’s easy to mistake Replay for another browser recorder tool, Laster explained how Replay differs. “On one end of the spectrum, you have something like a video recorder, then go along that spectrum a little bit further and you have something like a session replay tool and observability tool.”


Software AI Accelerators: AI Performance Boost for Free

The increasing diversity of AI workloads has necessitated a business demand for a variety of AI-optimized hardware architectures. These can be classified into three main categories: AI-accelerated CPU, AI-accelerated GPU, and dedicated hardware AI accelerators. We see multiple examples of all three of these hardware categories in the market today, for example Intel Xeon CPUs with DL Boost, Apple CPUs with Neural Engine, Nvidia GPUs with tensor cores, Google TPUs, AWS Inferentia, Habana Gaudi and many others that are under development by a combination of traditional hardware companies, cloud service providers, and AI startups. While AI hardware has continued to take tremendous strides, the growth rate of AI model complexity far outstrips hardware advancements. About three years ago, a Natural Language AI model like ELMo had ‘just’ 94 million parameters whereas this year, the largest models reached over 1 trillion parameters. 


Cybersecurity in the digital factory for manufacturers

Many companies are extremely hesitant about introducing the Industrial Internet of Things (IIoT) or cloud systems because they believe it will open the door to cybercriminals. What they fail to realize is they’re already facing this danger every day. A simple email with an attachment or a link can result in the encryption of all the information on a server. You’re at risk even if you haven’t implemented an entire ecosystem connecting customers and suppliers. That’s why it’s essential that you’re aware of the threats and be ready to respond quickly in the event of a cyberattack. Cybersecurity is currently on everyone’s lips. In many widely publicized cases, large companies have fallen victim to cyberattacks that compromised their operations in one way or another. In some of these cases, the companies’ security policies had not kept up with the past decade’s rapid changes in the use of digital technologies and tools. They mistakenly thought a cyberattack could only affect others. The sheet metal processing sector is no exception to this reality.


Chaos Engineering and Observability with Visual Metaphors

Monitoring and observability have become essential capabilities for engineering teams and, in general, for modern digital enterprises that want to deliver excellence in their solutions. Because there are many reasons to monitor and observe systems, Google has documented the Four Golden Signals, metrics that define what it means for a system to be healthy and that are the foundation for the current state of observability and monitoring platforms. The four metrics are described below: Latency is the time that a service takes to serve a request. It should include failed requests, such as HTTP 500 errors triggered by a loss of connection to a database or other critical backend, which might be served very quickly. Latency is a basic metric since a slow error is even worse than a fast error. Traffic is a measure of how much demand is being placed on the system. It determines how much stress the system is taking at a given time from users or transactions running through the service. For a web service, for example, this measurement is usually HTTP requests per second.
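
As a rough sketch of the first two signals, the middleware below uses only Go's standard library to time each request (latency) and count requests served (traffic). The counter and log format are invented; a production service would export these to a metrics backend rather than log them.

```go
// Minimal latency + traffic instrumentation using only the standard library.
package main

import (
	"log"
	"net/http"
	"sync/atomic"
	"time"
)

var requests int64 // traffic: total requests served so far

// withGoldenSignals wraps a handler, counting each request and logging
// how long it took to serve.
func withGoldenSignals(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		total := atomic.AddInt64(&requests, 1)
		log.Printf("%s %s latency=%s total_requests=%d",
			r.Method, r.URL.Path, time.Since(start), total)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", withGoldenSignals(mux)))
}
```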


Reimagining the Post Pandemic Future: Leveraging the benefits of Hyperautomation

As the world emerges from the impact of the pandemic, hyperautomation solutions will power digital self-services to take center stage connecting businesses with customers. With customers opening bank accounts remotely, consulting doctors online, interacting with governments via citizen self-serve, and so on, the scope of tech-enabled services keeps expanding from time to time. All this implies that there will be a gradual shift away from the traditional back-office towards self-serve. From a hyperautomation standpoint, this shift will see a considerable boost from low-code platforms with favorable B2C type interactions. Rich and sophisticated user experiences centered around simplicity and ease of use will be in demand. New user experiences will break ground allowing more flexibility and improved speed-to-solution. In addition to B2C type low-code portals, Artificial Intelligence (AI) and analytics will be in demand. For example, organizations will deploy AI technologies heavily to assist customer interactions. 


UK regulators seek input on algorithmic processing and auditing

On the benefits and harms of algorithms, the DRCF identified “six cross-cutting focus areas” for its work going forward: transparency of processing; fairness for those affected; access to information products, services and rights; resilience of infrastructure and systems; individual autonomy for informed decision-making; and healthy competition to promote innovation and better consumer outcomes. On algorithmic auditing, the DRCF said the stakeholders pointed to a number of issues in the current landscape: “First, they suggested that there is lack of effective governance in the auditing ecosystem, including a lack of clarity around the standards that auditors should be auditing against and around what good auditing and outcomes look like. “Second, they told us that it was difficult for some auditors, such as academics or civil society bodies, to access algorithmic systems to scrutinise them effectively. Third, they highlighted that there were insufficient avenues for those impacted by algorithmic processing to seek redress, and that it was important for regulators to ensure action is taken to remedy harms that have been surfaced by audits.”


Developer experience doesn’t have to stop at the front end

“It is natural to see providers making it easier for developers to do those things and that is where we get into infrastructure meeting software development,” RedMonk analyst James Governor told InfoWorld. “At the end of the day, you need platforms to enable you to be more productive without manually dealing with Helm charts, operators, or YAML.” Improving the back-end developer experience can do more than improve the lives of back-end developers. Providing better, more intuitive tools can enable back-end developers to get more done, while also bringing down barriers to allow a wider cohort of developers to manage their own infrastructure through thoughtful abstractions. “Developer control over infrastructure isn’t an all-or-nothing proposition,” Gartner analyst Lydia Leong wrote. “Responsibility can be divided across the application lifecycle, so that you can get benefits from “you build it, you run it” without necessarily parachuting your developers into an untamed and unknown wilderness and wishing them luck in surviving because it’s not an ‘infrastructure and operations team problem’ anymore.”


As supply chains tighten, logistics must optimize with AI

Before jumping the gun, identify your bottlenecks, understand the delivery systems available and discover the root cause of the congestion. Factors to analyze are the capacity of your shipping mediums, your warehouse management, average delivery time and the accuracy of your demand predictions. Only by understanding your current capabilities and inefficiencies will you be able to deploy the appropriate technology. Build your systems in an orderly manner: Build out your technology step by step. This is vital since some companies assume that adding multiple solutions and automating everything at once will reap the best results. This is not the case. ... Overall, applying AI analytics to problems will help you optimize elements like your optimal warehouse capacity, transportation utilization and delivery times. At some point, however, business leaders have to choose between tradeoffs. Is the main goal to keep costs low or to increase delivery speed? Are long transport distances to be avoided due to emissions? While AI can show which alternatives are more cost-effective or climate-friendly, companies will have to make the ultimate decision about their business trajectory.


SOC modernization: 8 key considerations

When an asset is under attack, security analysts need to understand if it is a test/development server or a cloud-based workload hosting a business-critical application. To get this perspective, SOC modernization combines threat, vulnerability, and business context data for analysts. ... Cisco purchased Kenna Security for risk-based vulnerability management, Mandiant grabbed Intrigue for attack surface management, and Palo Alto gobbled up Expanse Networks for ASM as well. Meanwhile, SIEM leader Splunk provides risk-based alerting to help analysts prioritize response and remediation actions. SOC modernization makes this blend a requirement. ... SOC modernization includes a commitment to constant improvement. This means understanding threat actor behavior, validating that security defenses can counteract modern attacks, and then reinforcing any defensive gaps that arise. CISOs are moving toward continuous red teaming and purple teaming for this very purpose. In this way, SOC modernization will drive demand for continuous testing and attack path management tools from vendors like AttackIQ, Cymulate, Randori, SafeBreach, and XMCyber.


Challenging misconceptions around a developer career

Experience counts a lot for developers, just as it does for pilots or surgeons. Technical experience is relatively easy to pick up, but the experiences that build instinct in the best developers are rarely gained alone. Developers work with others and learn from one another along the way. They seek collaboration on difficult problems and offer thoughtful feedback and suggestions on work in progress. Ultimately, developer tools are built for collaboration, encouraging the exchange of comments and open discussion. There are so many misconceptions about successful developers. Some of them may have some truth to them, while others are outdated or were completely false in the first place. The idea of developers as antisocial individuals is not always accurate. Developers are more often creative problem solvers who combine creativity with deep skills to tackle the task at hand. The most successful developers combine emotional intelligence with hard work and a curiosity for learning something new – and they help others around them to do the same.




Quote for the day:

"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward

Daily Tech Digest - May 01, 2022

The metaverse is a transformational opportunity for the business world

The idea of the metaverse gives enterprise software developers a roadmap to build software based on a single digital identity where companies join a network to connect with other companies who have likewise joined. This is not a system that belongs to any one company, but an environment where all companies are equal. Why do businesses want to connect? Because that’s the nature of a business at its essence, connecting with customers, suppliers, and any stakeholder to establish an expectation of value from the relationship, and then measure the realization of this value over time. This thinking is not inconsistent with the idea of a VR environment where a business user engages with some part of their business in an immersive environment. But we are setting the aperture much wider to say the entire business should be thought of as being part of the metaverse, and all of the data that exists about that business can be aimed at that digital identity to create a digital twin for the entire business. Then this digital business can connect with other businesses to do what businesses do — exchange value — but is now supported by a persistent, interoperable, collaborative digital space that is co-created and co-owned by those companies who have joined the metaverse.


Cognitive Biases About Leadership and How to Survive Them

We develop cognitive biases based on our life experience. Just as we expect teachers to be good with kids and surgeons to have a steady hand, we also hold behavioral expectations for our leaders. Today’s emphasis on servant leadership has us all believing that leaders are heroes, existing to serve the people and their every action should be a selfless gesture. Then, when they fail to act in accordance with our beliefs, we become disillusioned — the hero has fallen and everything they ever did, good or bad, gets lumped into one big giant disappointment. That’s a lot of burden for a leader to bear. Instead of looking at leaders as one whole unit, we need to see them as a collection of basic human traits. We forget that within every leader is a person, with flaws and imperfections. Instead of putting the whole person on a pedestal as some kind of one-size-fits-all embodiment of goodness, just admire them for their strengths. Unpack what you like about them without discarding the whole leader. Take the good they accomplished for what it is, but don’t blame humans for not being angels.
 

Data Is The New Business Fuel, But It Requires Sound Risk Management

Today’s remote or hybrid work model poses a whole new set of security challenges. Many companies can minimize risk by leveraging a multicloud strategy, but the risk associated with malware or ransomware can compromise crucial corporate and customer data. Despite this, according to a report from Menlo Security, only 27% of organizations have advanced threat protection in place for all endpoint devices with access to company data. It’s crucial that companies deploy advanced cybersecurity software and also train employees on acceptable use of public or home-based Wi-Fi usage. While enterprise data provides the fuel that drives accurate AI, it’s important that data scientists ensure that bias doesn’t creep into the algorithms that are developed. Data should be analyzed to ensure that it is diverse and doesn’t lead to any decisions that could provide an unfair advantage to certain populations. As an example, AI that helps to determine the best suppliers to work with should be trained with diverse supplier data. Speaking of suppliers, it’s not enough that data has proper governance within the organization. 


How Aurora Serverless made the database server obsolete

Amazon Aurora Serverless v1 changed everything by enabling customers to resize their VMs without disrupting the database. It would look for gaps in transaction flows that would give it time to resize the VM. It would then freeze the database, move to a different VM behind the scenes, and then start the database again. This was a great starting point, explains Biswas, but finding transaction gaps isn't always easy. "When we have a very chatty database, we are running a bunch of concurrent transactions that overlap," he explains. "If there's no gap between them, then we can't find the point where we can scale." Consequently, the scaling process could take between five and 50 seconds to complete. It could sometimes end up disrupting the database if an appropriate transaction gap could not be found. That restricted Aurora Serverless instances to sporadic, infrequent workloads. "One piece of feedback that we heard from customers was that they wanted us to make Aurora Serverless databases suitable for their most demanding, most critical workloads," explained Biswas.
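
The gap-hunting behavior Biswas describes can be modeled in a few lines. The Go sketch below is our simplification, not AWS code: it polls an active-transaction counter and proceeds with a simulated freeze-and-move only if a gap appears before a deadline, which is why a chatty database could make v1 scaling slow or unsuccessful.

```go
// A simplified model of "find a transaction gap, then freeze and move the
// VM". All names and timings are invented; Aurora's mechanism is internal.
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
	"time"
)

var activeTxns int64 // incremented/decremented as transactions begin/end

// tryScale polls for a moment when no transactions are in flight. On a
// chatty database with overlapping transactions, no gap may ever appear,
// so the attempt gives up at the deadline.
func tryScale(deadline time.Duration) error {
	start := time.Now()
	for time.Since(start) < deadline {
		if atomic.LoadInt64(&activeTxns) == 0 {
			fmt.Println("gap found: freezing database, moving to resized VM")
			return nil
		}
		time.Sleep(10 * time.Millisecond) // poll again shortly
	}
	return errors.New("no transaction gap before deadline; scaling aborted")
}

func main() {
	atomic.StoreInt64(&activeTxns, 0) // quiet database: gap found immediately
	if err := tryScale(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```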


The ever-expanding cloud continues to storm the IT universe

VMware Inc. several years ago cleaned up its fuzzy cloud strategy and partnered up with everyone. In the survey data, VMware Cloud on AWS is doing well, as is VMware Cloud, its on-premises offering. Even though it’s somewhat lower on the X-axis relative to last quarter, it’s moving to the right with a greater presence in the data set. Dell and HPE are also interesting. Both companies are going hard after as-a-service with APEX and GreenLake, respectively. HPE, based on the survey data from ETR, seems to have a lead in spending momentum, while Dell has a larger presence in the survey as a much bigger company. HPE is climbing up on the X-axis, as is Dell, although not as quickly. And the point we come back to often is that the definition of cloud is in the eye of the customer. AWS can say, “That’s not cloud.” And the on-prem crowd can say, “We have cloud too!” It really doesn’t matter. What matters is what the customer thinks and in which platforms they choose to invest. That’s why we keep circling back to the idea of supercloud. You are seeing it evolve and you’re going to hear more and more about it.


Solving Business Problems With Blockchain

Smart contracts are one application of blockchain that can vastly help companies in securing a deal. By using smart contracts, companies can encode an agreement in electronic code that helps organizations develop a venture in a conflict-free manner. Unlike in a traditional arrangement, if a company tries to change the terms of the contract or refuses to release a payment, everybody on the network can leverage the technology’s transparency to see it, and the contract’s code automatically freezes the deal. The agreement does not continue until the company pays what is due or goes back to keeping up with the guidelines. This smart management of contracts helps businesses keep operations running without any friction. And because blockchain is a technology that increases transparency, tracking the products coming into and going out of a site can be managed efficiently by everyone on the network. Every time a product halts at a specific gateway, the event is documented and inserted into the blockchain ledger. This documentation increases transparency into cargo status and ensures shipments reach retailers on time and intact.
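
As a toy model of that freeze behavior, the Go sketch below encodes a deal that locks itself the moment proposed terms diverge from what was agreed, releasing funds only once the terms are honored and payment is in. Real smart contracts execute on-chain (for example, in Solidity); this merely illustrates the state machine.

```go
// A toy deal state machine: term changes freeze the deal, and funds are
// released only when terms are intact and payment has been made.
package main

import (
	"errors"
	"fmt"
)

type deal struct {
	agreedTerms string
	paid        bool
	frozen      bool
}

// propose freezes the deal the moment submitted terms diverge from what
// everyone on the network already agreed to.
func (d *deal) propose(terms string) {
	if terms != d.agreedTerms {
		d.frozen = true
	}
}

// release pays out only if the deal is unfrozen and payment is in.
func (d *deal) release() error {
	if d.frozen || !d.paid {
		return errors.New("deal frozen or unpaid: funds stay locked")
	}
	fmt.Println("terms honored: funds released")
	return nil
}

func main() {
	d := &deal{agreedTerms: "deliver 100 units by June", paid: true}
	d.propose("deliver 80 units by June") // unilateral change freezes the deal
	fmt.Println(d.release())              // deal frozen or unpaid: funds stay locked
}
```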


The Future of Health Data Management: Enabling a Trusted Research Environment

TRE is becoming a commonly used acronym among the science and research community. In general, a TRE is a centralized computing database that securely holds data and allows users to gain access for analysis. TREs are only accessed by approved researchers, and no data ever leaves the location. Because the data stays put, the risk to patient confidentiality is reduced. ... TREs are becoming the architectural backbone for health data in many research organizations. While this is a step in the right direction, many TREs still can’t communicate with those at other organizations, or even with other departments within their own organization. ... As the genomic sector continues to grow, the ability of TREs to communicate will allow researchers and scientists to collaborate effectively on life-threatening diseases and diagnoses by breaking down the “silos” of health data. That doesn’t mean moving data. Life sciences data sets are too large to move efficiently – and to complicate matters, many data security regulations forbid data to leave an organization, state or nation.


Designing Societally Beneficial Reinforcement Learning Systems

As an RL agent collects new data and the policy adapts, there is a complex interplay between current parameters, stored data, and the environment that governs evolution of the system. Changing any one of these three sources of information will change the future behavior of the agent, and moreover these three components are deeply intertwined. This uncertainty makes it difficult to back out the cause of failures or successes. In domains where many behaviors can possibly be expressed, the RL specification leaves a lot of factors constraining behavior unsaid. For a robot learning locomotion over an uneven environment, it would be useful to know what signals in the system indicate it will learn to find an easier route rather than a more complex gait. In complex situations with less well-defined reward functions, these intended or unintended behaviors will encompass a much broader range of capabilities, which may or may not have been accounted for by the designer. ... While these failure modes are closely related to control and behavioral feedback, Exo-feedback does not map as clearly to one type of error and introduces risks that do not fit into simple categories. 


Don’t Fear Artificial Intelligence; Embrace it Through Data Governance

Data-centric AI is evolving, and should include relevant data management disciplines, techniques, and skills, such as data quality, data integration, and data governance, which are foundational capabilities for scaling AI. Further, data management activities don’t end once the AI model has been developed. To support this, and to allow for malleability in the ways that data is managed, HPE has launched a new initiative called Dataspaces, a powerful cloud-agnostic digital services platform aimed at putting more control into the hands of data producers and curators as they build intelligent systems. Addressing, head on, the data gravity and compliance considerations that exist for critical datasets, Dataspaces gives data producers and consumers frictionless access to the data they need, when they need it, supporting better integration, discovery, and access, enhanced collaboration, and improved governance to boot. This means that organisations can finally leverage an ecosystem of AI-centric data management tools that combine both traditional and new capabilities to prepare the enterprise for success in the era of decision intelligence.


How DAOS Are Changing Leadership

Traditionally, top-down leadership comes to those who either already have power or the ability to purchase it. Since everyone has equal shares in a DAO, authority is not "given" to anyone. Instead, it's earned by the merits of the proposals made. This creates an organization that follows the guidance of someone people are voluntarily following. This always yields better results, whether through growth, innovation or higher profits. This style of leadership is something all good leaders can practice. Even if they didn't "earn" their role in the same way, they can earn the trust and loyalty of their team through their actions. ... Modern corporations are like enormous ships that require huge amounts of time and effort to change course. There is endless red tape and bureaucracy to navigate before any real change can be implemented. Because DAOs are more democratic, changes can be proposed and implemented with relatively little hassle. While DAOs are primarily based on the division of funds, leaders can still note how the process works and see how efficient it is. The level of efficiency DAOs create is something that great leaders can seek to replicate in their own organizations.
 


Quote for the day:

"Challenges in life always seek leaders and leaders seek challenges." -- Wayde Goodall