Daily Tech Digest - August 29, 2022

6 key board questions CIOs must be prepared to answer

The board wants assurances that the CIO has command of tech investments tied to corporate strategy. “Demystify that connection,” Ferro says. “Show how those investments tie to the bigger picture and show immediate return as much as you can.” Global CIO and CDO Anupam Khare tries to educate the board of manufacturer Oshkosh Corp. in his presentations. “My slide deck is largely in the context of the business so you can see the benefit first and the technology later. That creates curiosity about how this technology creates value,” Khare says. “When we say, ‘This project or technology has created this operating income impact on the business,’ that’s the hook. Then I explain the driver for that impact, and that leads to a better understanding of how the technology works.” Board members may also come in with technology suggestions of their own that they hear about from competitors or from other boards they’re on. ... Avoid the urge to break out technical jargon to explain the merits of new cloud platforms, customer-facing apps, or Slack as a communication tool, and “answer that question from a business context, not from a technology context,” Holley says.


From applied AI to edge computing: 14 tech trends to watch

Mobility has arrived at a “great inflection” point — a shift towards autonomous, connected, electric and smart (ACES) technologies. This shift aims to disrupt markets while improving the efficiency and sustainability of land and air transportation of people and goods. ACES technologies for road mobility saw significant adoption during the past decade, and the pace could accelerate because of sustainability pressures, McKinsey said. Advanced air-mobility technologies, on the other hand, are either in pilot phase — for example, airborne-drone delivery — or remain in the early stages of development — for example, air taxis — and face some concerns about safety and other issues. Overall, mobility technologies, which attracted $236bn last year, aim to improve the efficiency and sustainability of land and air transportation of people and goods. ... Sustainable consumption focuses on the use of goods and services that are produced with minimal environmental impact by using low carbon technologies and sustainable materials. At a macro level, sustainable consumption is critical to mitigating environmental risks, including climate change.


Why Memory Enclaves Are The Foundation Of Confidential Computing

Data encryption has been around for a long time. It was first made available for data at rest on storage devices like disk and flash drives, as well as data in transit as it passed through the NIC and out across the network. But data in use – literally data in the memory of a system within which it is being processed – has not, until fairly recently, been protected by encryption. With the addition of memory encryption and enclaves, it is now possible to actually deliver a Confidential Computing platform with a trusted execution environment (TEE) that provides data confidentiality. This stops unauthorized entities, whether people or applications, from viewing data while it is in use, in transit, or at rest. ... It effectively allows enterprises in regulated industries as well as government agencies and multi-tenant cloud service providers to better secure their environments. Importantly, Confidential Computing means that any organization running applications on the cloud can be sure that any other users of the cloud capacity and even the cloud service providers themselves cannot access the data or applications residing within a memory enclave.


Metasurfaces offer new possibilities for quantum research

Metasurfaces are ultrathin planar optical devices made up of arrays of nanoresonators. Their subwavelength thickness (a few hundred nanometers) renders them effectively two-dimensional. That makes them much easier to handle than traditional bulky optical devices. Even more importantly, because of this reduced thickness, momentum conservation of the photons is relaxed: the photons travel through far less material than in traditional optical devices, and according to the uncertainty principle, confinement in space leads to undefined momentum. This allows multiple nonlinear and quantum processes to happen with comparable efficiencies and opens the door to using many new materials that would not work in traditional optical elements. For this reason, and also because they are compact and more practical to handle than bulky optical elements, metasurfaces are coming into focus as sources of photon pairs for quantum experiments. In addition, metasurfaces could simultaneously transform photons in several degrees of freedom, such as polarization, frequency, and path.


Agile: Starting at the top

Having strong support was key to this change in beliefs among the leadership team. Aisha Mir, IT Agile Operations Director for Thales North America, has a track record of successful agile transformations and was eager to help the leadership team overcome any initial hurdles. “The best thing I saw out of previous transformations I’ve been a part of was the way that the team started working together and the way they were empowered. I really wanted that for our team,” says Mir. “In those first few sprints, we saw that there were ways for all of us to help each other, and that’s when the rest of the team began believing. I had seen that happen before – where the team really becomes one unit and they see what tasks are in front of them – and they scrum together to finish it.” While the support was essential, one motivating factor helped them work through any challenge in their way: How could they ask other parts of the IT organization to adopt agile methodologies if they couldn’t do it themselves? “When we started, we all had some level of skepticism but were willing to try it because we knew this was going to be the life our organization was going to live,” says Daniel Baldwin.


AutoML: The Promise vs. Reality According to Practitioners

The data collection, data tagging, and data wrangling of pre-processing are still tedious, manual processes. There are utilities that provide some time savings and aid in simple feature engineering, but overall, most practitioners do not make use of AutoML as they prepare data. In post-processing, AutoML offerings have some deployment capabilities, but deployment is a famously problematic interaction between MLOps and DevOps that is in need of automation. Take for example one of the most common post-processing tasks: generating reports and sharing results. While cloud-hosted AutoML tools are able to auto-generate reports and visualizations, our findings show that users still fall back on manual approaches to modify the default reports. The second most common post-processing task is deploying models. Automated deployment was available only to users of hosted AutoML tools, and even there limitations remained around security and end-user experience. The failure of AutoML to be end-to-end can actually cut into the efficiency improvements.


Best Practices for Building Serverless Microservices

There are two schools of thought when it comes to structuring your repositories for an application: monorepo vs multiple repos. A monorepo is a single repository that has logical separations for distinct services. In other words, all microservices would live in the same repo but would be separated by different folders. Benefits of a monorepo include easier discoverability and governance. Drawbacks include the size of the repository as the application scales, a large blast radius if the master branch is broken, and ambiguity of ownership. On the flip side, having a repository per microservice has its ups and downs. Benefits of multiple repos include distinct domain boundaries, clear code ownership, and succinct and minimal repo sizes. Drawbacks include the overhead of creating and maintaining multiple repositories and applying consistent governance rules across all of them. In the case of serverless, I opt for a repository per microservice. It draws clear lines for what the microservice is responsible for and keeps the code lightweight and focused.


5 Super Fast Ways To Improve Core Web Vitals

High-quality images consume more space, and when images are large, your loading time increases. If the loading time increases, the user experience suffers. So, keeping the image size as small as possible is best: compress your images. If you have created your website using WordPress, you can use plugins like ShortPixel to compress images. If not, many online services are available to compress images. However, you might wonder: does compression affect the quality of the image? To some extent, yes, but the loss is typically visible only when zooming in on the image. Moreover, use JPEG format for images and SVG format for logos and icons; better still, use the WebP format if you can. ... One of the important metrics of the Core Web Vitals is Cumulative Layout Shift. Imagine that you're scrolling through a website on your phone. You think it has finished loading and is ready to engage with. Now, you see a text which has a hyperlink that has grasped your interest, and you're about to click it. When you click it, all of a sudden, the text disappears, and there is an image in the place of the text.
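If your site is not on WordPress, the same compression can be scripted offline. Below is a minimal sketch using the Pillow library; the folder names and quality setting are illustrative assumptions, not recommendations from the article.

```python
# Minimal sketch: batch-convert JPEG/PNG images to compressed WebP with Pillow.
# The folder names and quality value are illustrative assumptions.
from pathlib import Path

from PIL import Image

SRC_DIR = Path("images")
OUT_DIR = Path("images_webp")
OUT_DIR.mkdir(exist_ok=True)

for src in list(SRC_DIR.glob("*.jpg")) + list(SRC_DIR.glob("*.png")):
    img = Image.open(src)
    out = OUT_DIR / (src.stem + ".webp")
    # quality=80 keeps the loss hard to notice unless you zoom in on the image
    img.save(out, format="WEBP", quality=80, method=6)
    print(f"{src.name}: {src.stat().st_size} -> {out.stat().st_size} bytes")
```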


Cyber-Insurance Firms Limit Payouts, Risk Obsolescence

While the insurers' position is understandable, businesses — which have already seen their premiums skyrocket over the past three years — should question whether insurance still mitigates risk effectively, says Pankaj Goyal, senior vice president of data science and cyber insurance at Safe Security, a cyber-risk analysis firm. "Insurance works on trust, [so answer the question,] 'will an insurance policy keep me whole when a bad event happens?' " he says. "Today, the answer might be 'I don't know.' When customers lose trust, everyone loses, including the insurance companies." ... Indeed, the exclusion will likely result in fewer companies relying on cyber insurance as a way to mitigate catastrophic risk. Instead, companies need to make sure that their cybersecurity controls and measures can mitigate the cost of any catastrophic attack, says David Lindner, chief information security officer at Contrast Security, an application security firm. Creating data redundancies, such as backups, expanding visibility of network events, using a trusted forensics firm, and training all employees in cybersecurity can all help harden a business against cyberattacks and reduce damages.


Data security hinges on clear policies and automated enforcement

The key is to establish policy guardrails for internal use to minimize cyber risk and maximize the value of the data. Once policies are established, the next consideration is establishing continuous oversight. This component is difficult if the aim is to build human oversight teams, because combining people, processes, and technology is cumbersome, expensive, and not 100% reliable. Training people to manually combat all these issues is not only hard but requires a significant investment over time. As a result, organizations are looking to technology to provide long-term, scalable, and automated policies to govern data access and adhere to compliance and regulatory requirements. They are also leveraging these modern software approaches to ensure privacy without forcing analysts or data scientists to “take a number” and wait for IT when they need access to data for a specific project or even everyday business use. With a focus on establishing policies and deciding who gets to see/access what data and how it is used, organizations gain visibility into and control over appropriate data access without the risk of overexposure. 
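As a rough illustration of what policy guardrails with automated enforcement can look like in code, here is a minimal attribute-based access check; the roles, column tags, and rule are hypothetical and not drawn from the article.

```python
# Minimal sketch of attribute-based access control (ABAC) for data access.
# Roles, data tags, and the single rule are hypothetical examples.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    role: str            # e.g. "data_scientist", "analyst"
    purpose: str         # e.g. "churn_model", "ad_hoc"
    column_tags: set     # tags attached to the columns being read


POLICY = {
    # role -> tags that role may read without extra approval
    "analyst": {"public", "internal"},
    "data_scientist": {"public", "internal", "pseudonymized_pii"},
}


def is_allowed(req: AccessRequest) -> bool:
    allowed_tags = POLICY.get(req.role, set())
    return req.column_tags <= allowed_tags  # every requested tag must be permitted


# Example: an analyst asking for raw PII is denied automatically,
# with no ticket to IT and no manual review queue.
print(is_allowed(AccessRequest("analyst", "ad_hoc", {"internal", "raw_pii"})))  # False
```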



Quote for the day:

"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome." -- John Donahoe

Daily Tech Digest - August 28, 2022

How to build a winning analytics team

Analytics teams thrive in dynamic environments that reward curiosity, encourage innovation, and set high expectations. Building and reinforcing this type of culture can help put organizations on a path to earning impressive returns from analytics investments. An active analytics culture thrives when CXOs reward curiosity over perfection. Encourage analysts to challenge convention and ask questions as a method to improve quality and reduce risks. This thinking goes hand in hand with a test-and-learn mentality, where pushing boundaries through proactive experimentation helps identify what works, and optimize accordingly. It’s also important to create a culture where failure and success are celebrated equally. Giving airtime to what went wrong allows the team to more effectively learn from their mistakes and see that perfection is an unhealthy pipe dream. This encourages an environment that holds analysts accountable for delivering quality processes and results, further helping to mitigate risk and improve marketing programs.


How SSE Renewables uses Azure Digital Twins for more than machines

This approach will allow SSE to experiment with reducing risks to migrating birds. For example, they can determine an optimum blade speed that will allow flocks to pass safely while still generating power. By understanding the environment around the turbines, it will be possible to control them more effectively and with significantly less environmental impact. Simon Turner, chief technology officer for data and AI at Avanade, described this approach as “an autonomic business.” Here, data and AI work together to deliver a system that is effectively self-operating, one he described as using AI to “look after certain things that you understood that could guide the system to make decisions on your behalf.” Key to this approach is extending the idea of a digital twin with machine learning and large-scale data. ... As Turner notes, this approach can be extended to more than wind farms, using it to model any complex system where adding new elements could have a significant effect, such as understanding how water catchment areas work or how hydroelectric systems can be tuned to let salmon pass unharmed on their way to traditional breeding grounds, while still generating power.


McKinsey report: Two AI trends top 2022 outlook

Roger Roberts, partner at McKinsey and one of the report’s coauthors, said of applied AI, which is defined “quite broadly” in the report, “We see things moving from advanced analytics towards… putting machine learning to work on large-scale datasets in service of solving a persistent problem in a novel way,” he said. That move is reflected in an explosion of publication around AI, not just because AI scientists are publishing more, but because people in a range of domains are using AI in their research and pushing the application of AI forward, he explained. ... According to the McKinsey report, industrializing machine learning (ML) “involves creating an interoperable stack of technical tools for automating ML and scaling up its use so that organizations can realize its full potential.” The report noted that McKinsey expects industrializing ML to spread as more companies seek to use AI for a growing number of applications. “It does encompass MLops, but it extends more fully to include the way to think of the technology stack that supports scaling, which can get down to innovations at the microprocessor level,” said Roberts. 


CISA: Prepare now for quantum computers, not when hackers use them

The main negative implication of quantum computing concerns the cryptography of secrets, a fundamental element of information security. Cryptographic schemes that are today considered secure will be cracked in mere seconds by quantum computers, leaving persons, companies, and entire countries powerless against the computing supremacy of their adversaries. “When quantum computers reach higher levels of computing power and speed, they will be capable of breaking public key cryptography, threatening the security of business transactions, secure communications, digital signatures, and customer information,” explains CISA. This could threaten data in transit relating to top-secret communications, banking operations, military operations, government meetings, critical industrial processes, and more. Yesterday, China's Baidu introduced “Qian Shi,” an industry-level quantum supercomputer capable of achieving stable performance at 10 quantum bits of power.


How Are Business Intelligence And Data Management Related?

Business intelligence (BI) describes the procedures and tools that assist in getting helpful, actionable information and intelligence from data. A company’s data is accessed by business intelligence tools, which then display analytics and insights as reports, dashboards, graphs, summaries, and charts. Business intelligence has advanced significantly from its theoretical inception in the 1950s, and you must realize that it is not just a tool for big businesses. Most BI providers are tailoring their software to users’ needs because they recognize that our current era is considerably more oriented toward small structures like start-ups. SaaS, or software-as-a-service, vendors are especially active in this respect. BI is also a more straightforward tool than it once was. It is still a professional tool; managing data is not simple, even with the most powerful technology. Nevertheless, with the emergence of the cloud and SaaS in the early 21st century, BI has developed into something more accessible than local software, which used to require installation on every computer in the organization and could represent a sizable expenditure.


Oxford scientist says greedy physicists have overhyped quantum computing

It’s unclear why Dr. Gourianov would leave big tech out of the argument entirely. There are dozens upon dozens of papers from Google and IBM alone demonstrating breakthrough after breakthrough in the field. Gourianov’s primary argument against quantum computing appears, inexplicably, to be that quantum computers won’t be very useful for cracking quantum-resistant encryption. With respect, that’s like saying we shouldn’t develop surgical scalpels because they’re practically useless against chain mail armor. Per Gourianov’s article: Shor’s algorithm has been a godsend to the quantum industry, leading to untold amounts of funding from government security agencies all over the world. However, the commonly forgotten caveat here is that there are many alternative cryptographic schemes that are not vulnerable to quantum computers. It would be far from impossible to simply replace these vulnerable schemes with so-called “quantum-secure” ones. This appears to suggest that Gourianov believes at least some physicists have pulled a bait-and-switch on governments and investors by convincing everyone that we need quantum computers for security.


Computer vision is primed for business value

In healthcare, computer vision is used extensively in diagnostics, such as in AI-powered image and video interpretation. It is also used to monitor patients for safety, and to improve healthcare operations, says Gartner analyst Tuong Nguyen. “The potential for computer vision is enormous,” he says. “It’s basically helping machines make sense of the world. The applications are infinite — really, anything you need to see. The entire world.” According to the fourth annual Optum survey on AI in healthcare, released at the end of 2021, 98% of healthcare organizations either already have an AI strategy or are planning to implement one, and 99% of healthcare leaders believe AI can be trusted for use in health care. Medical image interpretation was one of the top three areas cited by survey respondents where AI can be used to improve patient outcomes. The other two areas, virtual patient care and medical diagnosis, are also ripe for computer vision. Take, for example, idiopathic pulmonary fibrosis, a deadly lung disease that affects hundreds of thousands of people worldwide.


AI Therapy: Digital Solution to Address Mental Health Issues

AI for health has been a long-discussed topic, specifically bringing digital solutions to mental health issues through therapy. Some applications have already been developed, such as Genie in a Headset, which manages human emotional behavior in work environments. But bringing AI into therapy means building an AI that feels and is keen to improve mental health issues. The fundamental objective of AI therapy is to assist patients in fighting mental illnesses. Ideally, this technology would be able to distinguish each patient's needs and personalize their mental health programs through an efficient data collection process. ... Psychological therapy is a tough job that requires extracting confidential information from patients that they hesitate to share. Like any other medical issue, it is essential to diagnose the problem before curing it. It requires exquisite skill to make someone comfortable. An AI therapist can access your cellphone, laptop, personal data, emails, all-day movement, and routine, making it more efficient in understanding you and your problems. Knowing problems in depth gives an AI therapist an advantage over a human therapist.


What is the Microsoft Intelligent Data Platform?

The pieces that make up the Microsoft Intelligent Data Platform are services you may already be using because it includes all of Microsoft’s key data services, such as SQL Server 2022, Azure SQL, Cosmos DB, Azure Synapse, Microsoft Purview and more. But you’re probably not using them together as well as you could; the Intelligent Data Platform is here to make that easier. “These are the best-in-class services across what we consider the three core pillars of a data platform,” Mansour explained. According to Mansour, the Microsoft Intelligent Data Platform offers services for databases and operational data store, analytics, and data governance, providing authorized users with insight that will allow them to properly understand, manage and govern their business’s data. “Historically, customers have been thinking about each of those areas independent from one another, and what the Intelligent Data Platform does is bring all these pieces together,” said Mansour. Integrating databases, analytics and governance isn’t new either, but the point of presenting this as a platform is the emphasis on simplifying the experience of working with it. 


Threatening clouds: How can enterprises protect their public cloud data?

Public clouds don’t inherently impose security threats, said Gartner VP analyst Patrick Hevesi — in fact, hyperscale cloud providers usually have more security layers, people and processes in place than most organizations can afford in their own data centers. However, the biggest red flag for organizations when selecting a public cloud provider is the lack of visibility into their security measures, he said. Among the biggest issues in recent memory are misconfigurations of cloud storage buckets, said Hevesi, which have opened files up to data exfiltration. Some cloud providers have also had outages due to misconfigurations of identity platforms, which prevented their cloud services from starting up properly and in turn affected tenants. Smaller cloud providers, meanwhile, have been taken offline by distributed denial-of-service (DDoS) attacks, in which perpetrators make a machine or network resource unavailable to intended users by disrupting the services — either short-term or long-term — of a host connected to a network.
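Bucket misconfigurations of the kind described above are usually detectable programmatically. Here is a minimal sketch using boto3 to flag S3 buckets whose public access is not fully blocked; error handling is simplified, and a real audit would also inspect bucket policies and ACLs.

```python
# Minimal sketch: flag S3 buckets without a full public access block (boto3).
# Simplified for illustration; a real audit would also check bucket policies and ACLs.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        conf = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(conf.values())
    except ClientError:
        # No public access block configured at all for this bucket
        fully_blocked = False
    status = "ok" if fully_blocked else "REVIEW - public access not fully blocked"
    print(f"{name}: {status}")
```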



Quote for the day:

“Real integrity is doing the right thing, knowing that nobody’s going to know whether you did it or not.” -- Oprah Winfrey

Daily Tech Digest - August 27, 2022

Intel Hopes To Accelerate Data Center & Edge With A Slew Of Chips

McVeigh noted that Intel’s integrated accelerators will be complemented by the upcoming discrete GPUs. He called the Flex Series GPUs “HPC on the edge,” with their low power envelopes, and pointed to Ponte Vecchio – complete with 100 billion transistors in 47 chiplets that leverage both Intel 7 manufacturing processes and 5 nanometer and 7 nanometer processes from Taiwan Semiconductor Manufacturing Co – and then Rialto Bridge. Both Ponte Vecchio and Sapphire Rapids will be key components in Argonne National Laboratory’s Aurora exascale supercomputer, which is due to power on later this year and will deliver more than 2 exaflops of peak performance. ... “Another part of the value of the brand here is around the software unification across Xeon, where we leverage the massive amount of capabilities that are already established through decades throughout that ecosystem and bring that forward onto our GPU rapidly with oneAPI, really allowing for both the sharing of workloads across CPU and GPU effectively and to ramp the codes onto the GPU faster than if we were starting from scratch,” he said.


Performance isolation in a multi-tenant database environment

Our multi-tenant Postgres instances operate on bare metal servers in non-containerized environments. Each backend application service is considered a single tenant, where they may use one of multiple Postgres roles. Due to each cluster serving multiple tenants, all tenants share and contend for available system resources such as CPU time, memory, disk IO on each cluster machine, as well as finite database resources such as server-side Postgres connections and table locks. Each tenant has a unique workload that varies in system level resource consumption, making it impossible to enforce throttling using a global value. This has become problematic in production, affecting neighboring tenants. Throughput: a tenant may issue a burst of transactions, starving shared resources from other tenants and degrading their performance. Latency: a single tenant may issue very long or expensive queries, often concurrently, such as large table scans for ETL extraction or queries with lengthy table locks. Both of these scenarios can result in degraded query execution for neighboring tenants. Their transactions may hang or take significantly longer to execute due to either reduced CPU share time, or slower disk IO operations due to many seeks from misbehaving tenant(s).
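One common first line of defense against the latency problem, separate from whatever mechanism the article's authors ultimately built, is to cap query cost per tenant with per-role settings in Postgres. A minimal sketch follows; the role names and limits are illustrative assumptions.

```python
# Minimal sketch: per-tenant guardrails via Postgres role settings (psycopg2).
# Role names and limits are illustrative; this is a common mitigation,
# not necessarily the mechanism the article describes.
import psycopg2

LIMITS = {
    "tenant_a": {"statement_timeout": "5s", "idle_in_transaction_session_timeout": "10s"},
    "tenant_b": {"statement_timeout": "30s", "idle_in_transaction_session_timeout": "60s"},
}

conn = psycopg2.connect("dbname=app user=admin")
conn.autocommit = True
with conn.cursor() as cur:
    for role, settings in LIMITS.items():
        for param, value in settings.items():
            # ALTER ROLE ... SET applies the limit to every new session for that tenant
            cur.execute(f"ALTER ROLE {role} SET {param} = %s", (value,))
conn.close()
```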


Quantum Encryption Is No More A Sci-Fi! Real-World Consequences Await

Quantum computing will enable enterprise customers to perform complex simulations in significantly less time than traditional software. Quantum algorithms are very challenging to develop, implement, and test on current quantum computers. Quantum techniques are also being used to improve the randomness of computer-based random number generators. The world’s leading quantum scientists in the field of quantum information engineering are working to turn what was once the realm of science fiction into reality. Businesses need to deploy next-generation data security solutions with equally powerful protection based on the laws of quantum physics, literally fighting quantum computers with quantum encryption. Quantum computers today are no longer considered to be science fiction. The main difference is that quantum encryption uses quantum bits, or qubits, composed of optical photons, rather than electrical binary digits or bits. Qubits can also be inextricably linked together using a phenomenon called quantum entanglement.


What Is The Difference Between Computer Vision & Image processing?

We are constantly exposed to and engaged with various visually similar objects around us. By using machine learning techniques, the discipline of AI known as computer vision enables machines to see, comprehend, and interpret the visual environment around us. It uses machine learning approaches to extract useful information from digital photos, movies, or other observable inputs by identifying patterns. Although computer vision and image processing may look and feel similar, they differ in a few ways. Computer vision aims to distinguish between, classify, and arrange images according to their distinguishing characteristics, such as size, color, etc. This is similar to how people perceive and interpret images. ... Digital image processing uses a digital computer to process digital and optical images. A computer views an image as a two-dimensional signal composed of pixels arranged in rows and columns. A digital image comprises a finite number of elements, each located in a specific place with a particular value. These elements are referred to as picture elements, image elements, or pixels.
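The "two-dimensional signal of pixels arranged in rows and columns" is easy to see by loading an image as an array; the file name below is just an illustrative placeholder, and an RGB image is assumed.

```python
# Minimal sketch: an image is a grid of pixel values (rows x columns x channels).
# "photo.jpg" is an illustrative file name; an RGB image is assumed.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg"))
rows, cols, channels = img.shape
print(f"{rows} rows x {cols} columns, {channels} channels per pixel")

# Each element (pixel) holds one value per channel, e.g. the RGB triple at row 10, column 20:
print(img[10, 20])  # -> something like [183  97  42]
```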


Lessons in mismanagement

In the decades since the movie’s release, the world has become a different place in some important ways. Women are now everywhere in the world of business, which has changed irrevocably as a result. Unemployment is quite low in the United States and, by Continental standards, in Europe. Recent downturns have been greeted by large-scale stimuli from central banks, which have blunted the impact of stock market slides and even a pandemic. But it would be foolish to think that the horrendous managers and desperate salesmen of Glengarry Glen Ross exist only as historical artifacts. Mismanagement and desperation go hand in hand and are most apparent during hard times, which always come around sooner or later. By immersing us in the commercial and workplace culture of the past, movies such as Glengarry can help us understand our own business culture. But they can also help prepare us for hard times to come—and remind us how not to manage, no matter what the circumstances. ... Everyone, in every organization, has to perform. 


How the energy sector can mitigate rising cyber threats

As energy sector organisations continue expanding their connectivity to improve efficiency, they must ensure that the perimeters of their security processes keep up. Without properly secured infrastructure, no digital transformation will ever be successful, and not only internal operations, but also the data of energy users are bound to become vulnerable. But by following the above recommendations, energy companies can go a long way in keeping their infrastructure protected in the long run. This endeavour can be strengthened further by partnering with cyber security specialists like Dragos, which provides an all-in-one platform that enables real-time visualisation, protection and response against ever-present threats to the organisation. These capabilities, combined with threat intelligence insights and supporting services across the industrial control system (ICS) journey, are sure to provide peace of mind and added confidence in the organisation’s security strategy. For more information on Dragos’s research around cyber threat activity targeting the European energy sector, download the Dragos European Industrial Infrastructure Cyber Threat Perspective report.


How to hire (and retain) Gen Z talent

The global pandemic has forever changed the way we work. The remote work model has been successful, and we’ve learned that productivity does not necessarily decrease when managers and their team members are not physically together. This has been a boon for Gen Z – a generation that grew up surrounded by technology. Creating an environment that gives IT employees the flexibility to conduct their work remotely has opened the door to a truly global workforce. Combined with the advances in digital technologies, we’ve seen a rapid and seamless transition in how employment is viewed. Digital transformation has leveled the playing field for many companies by changing requirements around where employees need to work. Innovative new technologies, from videoconferencing to IoT, have shifted the focus from an employee’s location to their ability. Because accessing information and managing vast computer networks can be done remotely, the location of workers has become a minor issue.


'Sliver' Emerges as Cobalt Strike Alternative for Malicious C2

Enterprise security teams, which over the years have honed their ability to detect the use of Cobalt Strike by adversaries, may also want to keep an eye out for "Sliver." It's an open source command-and-control (C2) framework that adversaries have increasingly begun integrating into their attack chains. "What we think is driving the trend is increased knowledge of Sliver within offensive security communities, coupled with the massive focus on Cobalt Strike [by defenders]," says Josh Hopkins, research lead at Team Cymru. "Defenders are now having more and more successes in detecting and mitigating against Cobalt Strike. So, the transition away from Cobalt Strike to frameworks like Sliver is to be expected," he says. Security researchers from Microsoft this week warned about observing nation-state actors, ransomware and extortion groups, and other threat actors using Sliver along with — or often as a replacement for — Cobalt Strike in various campaigns. Among them are DEV-0237, a financially motivated threat actor associated with the Ryuk, Conti, and Hive ransomware families, and several groups engaged in human-operated ransomware attacks, Microsoft said.


Data Management in the Era of Data Intensity

When your data is spread across multiple clouds and systems, it can introduce latency, performance, and quality problems. And bringing together data from different silos and getting those data sets to speak the same language is a time- and budget-intensive endeavor. Your existing data platforms also may prevent you from managing hybrid data processing, which, as Ventana Research explains, “enable[s] analysis of data in an operational data platform without impacting operational application performance or requiring data to be extracted to an external analytic data platform.” The firm adds that: “Hybrid data processing functionality is becoming increasingly attractive to aid the development of intelligent applications infused with personalization and artificial intelligence-driven recommendations.” Such applications are clearly important because they can be key business differentiators and enable you to disrupt a sector. However, if you are grappling with siloed systems and data and legacy technology that is unable to ingest high volumes of complex data fast so that you can act in the moment, you may believe that it is impossible for your business to benefit from the data synergies that you and your customers might otherwise enjoy.


How to Achieve Data Quality in the Cloud

Everybody knows data quality is essential. Most companies spend significant money and resources trying to improve data quality. However, despite these investments, companies lose money every year because of bad data, with losses ranging from $9.7 million to $14.2 million annually. Traditional data quality programs do not work well for identifying data errors in cloud environments, for several reasons. Most organizations only look at the data risks they know, which is likely only the tip of an iceberg. Usually, data quality programs focus on completeness, integrity, duplicates and range checks. However, these checks only represent 30 to 40 percent of all data risks. Many data quality teams do not check for data drift, anomalies or inconsistencies across sources, which contribute to over 50 percent of data risks. The number of data sources, processes and applications has exploded because of the rapid adoption of cloud technology, big data applications and analytics. These data assets and processes require careful data quality control to prevent errors in downstream processes. The data engineering team can add hundreds of new data assets to the system in a short period.
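A drift check of the kind the passage says is often missing can be as simple as comparing a column's distribution in a new batch against a trusted baseline. Below is a minimal sketch using a two-sample Kolmogorov-Smirnov test; the file paths, column name, and threshold are illustrative assumptions.

```python
# Minimal sketch: flag distribution drift in a numeric column with a KS test.
# Column name, file paths, and the 0.05 threshold are illustrative choices.
import pandas as pd
from scipy.stats import ks_2samp

baseline = pd.read_csv("baseline_batch.csv")
incoming = pd.read_csv("todays_batch.csv")

stat, p_value = ks_2samp(baseline["order_amount"], incoming["order_amount"])
if p_value < 0.05:
    print(f"Possible drift in order_amount (KS stat={stat:.3f}, p={p_value:.4f}): investigate upstream")
else:
    print("No significant drift detected in order_amount")
```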



Quote for the day:

"Problem-solving leaders have one thing in common: a faith that there's always a better way." -- Gerald M. Weinberg

Daily Tech Digest - August 26, 2022

CISA: Just-Disclosed Palo Alto Networks Firewall Bug Under Active Exploit

Bud Broomhead, CEO at Viakoo, says bugs that can be marshaled into service to support DDoS attacks are in more and more demand by cybercriminals -- and are increasingly exploited. "The ability to use a Palo Alto Networks firewall to perform reflected and amplified attacks is part of an overall trend to use amplification to create massive DDoS attacks," he says. "Google's recent announcement of an attack which peaked at 46 million requests per second, and other record-breaking DDoS attacks will put more focus on systems that can be exploited to enable that level of amplification." The speed of weaponization also fits the trend of cyberattackers taking increasingly less time to put newly disclosed vulnerabilities to work — but this also points to an increased interest in lesser-severity bugs on the part of threat actors. "Too often, our researchers see organizations move to patch the highest-severity vulnerabilities first based on the CVSS," Terry Olaes, director of sales engineering at Skybox Security, wrote in an emailed statement. 


Kestrel: The Microsoft web server you should be using

Kestrel is an interesting option for anyone building .NET web applications. It’s a relatively lightweight server compared to IIS, and as it’s cross-platform, it simplifies how you might choose a hosting platform. It's also suitable as a development tool, running on desktop hardware for tests and experimentation. There’s support for HTTPS, HTTP/2, and a preview release of QUIC, so your code is future-proof and will run securely. The server installs as part of ASP.NET Core and is the default for sites that aren’t explicitly hosted by IIS. You don’t need to write any code to launch Kestrel, beyond using the familiar WebApplication.CreateBuilder method. Microsoft has designed Kestrel to operate with minimal configuration, either using a settings file that’s created when you use dotnet new to set up an app scaffolding or when you create a new app in Visual Studio. Apps are able to configure Kestrel using the APIs in WebApplication and WebApplicationBuilder, for example, adding additional ports. As Kestrel doesn’t run until your ASP.NET Core code runs, this is a relatively easy way to make server configuration dynamic, with any change simply requiring a few lines of code. 


Private 5G networks bring benefits to IoT and edge

Private 5G's potential in enterprise use cases that involve IoT and edge computing is not without challenges that the industry must address; a production-level system requires many touchpoints. Private 5G networks must be planned, deployed, verified and managed by service providers, system integrators and IT teams. Edge computing is a combination of hardware and software. Each of these elements can fail, so they must be maintained and upgraded practically without any downtime, especially for real-time, mission-critical applications. Admins must manage edge deployments with containers or VM orchestration. Both public cloud vendors and managed open source vendors are addressing this space by providing a virtual edge computing framework for application developers. Public cloud vendors have also started to provide out-of-the-box edge infrastructure that runs the same software tools that run on their public cloud, which can make it easier for developers. For private 5G, IoT and edge to be successful, the industry must develop an extensive roadmap. Many of these solutions require long-term maintenance and upgrades.


Google is exiting the IoT services business. Microsoft is doing the opposite

Google will be shuttering its IoT Core service, the company disclosed last week. Its stated reason: partners can better manage customers' IoT services and devices. While Microsoft also is relying heavily on partners as part of its IoT and edge-computing strategies, it is continuing to build up its stable of IoT services and more tightly integrate them with Azure. CEO Satya Nadella's "intelligent cloud/intelligent edge" pitch is morphing into more of an intelligent end-to-end distributed-computing play. ... Among Microsoft's current IoT offerings: Azure IoT Hub, a service for connecting, monitoring and managing IoT assets; Azure Digital Twins, which uses "spatial intelligence" to model physical environments; Azure IoT Edge, which brings analytics to edge-computing devices; Azure IoT Central; and Windows for IoT, which enables users to build edge solutions using Microsoft tools. On the IoT OS front, Microsoft has Azure RTOS, its real-time IoT platform; Azure Sphere, its Linux-based microcontroller OS platform and services; Windows 11 IoT Enterprise; and Windows 10 IoT Core -- a legacy IoT OS platform which Microsoft still supports but which hasn't been updated substantially since 2018.


Twitter's Ex-Security Chief Files Whistleblower Complaint

Zatko's complaint alleges that numerous security problems remained unresolved when he left. It also alleges that Twitter had been "penetrated by foreign intelligence agents," including Indian government agents as well as another, unnamed foreign intelligence agency. A federal jury recently found a former Twitter employee guilty of acting as an unregistered agent for Saudi Arabia while at the company. In his February final report to Twitter, Zatko alleged that "inaccurate and misleading" information concerning "Twitter's information security posture" had been transmitted to the company's risk committee, which risked the company making inaccurate reports to regulators, including the FTC. According to his report, the risk committee had been told that "nearly all Twitter endpoints (laptops) have security software installed." But he said the report failed to mention that of about 10,000 systems, 40% were not in compliance with "basic security settings," and 30% "do not have automatic updates enabled."


Announcing built-in container support for the .NET SDK

Containers are an excellent way to bundle and ship applications. A popular way to build container images is through a Dockerfile – a special file that describes how to create and configure a container image. ... This Dockerfile works very well, but there are a few caveats to it that aren’t immediately apparent, which arise from the concept of a Docker build context. The build context is the set of files that are accessible inside of a Dockerfile, and is often (though not always) the same directory as the Dockerfile. If you have a Dockerfile located beside your project file, but your project file is underneath a solution root, it’s very easy for your Docker build context to not include configuration files like Directory.Packages.props or NuGet.config that would be included in a regular dotnet build. You would have this same situation with any hierarchical configuration model, like EditorConfig or repository-local git configurations. This mismatch between the explicitly-defined Docker build context and the .NET build process was one of the driving motivators for this feature.


The Quantum Computing Threat: Risks and Responses

Asymmetric cryptographic systems are most at risk, implying that today’s public key infrastructure, which forms the basis of almost all of our security infrastructure, would be compromised. That being said, the level of risk may be different depending on the data to be protected – for instance, a life insurance policy that will be valid for many years to come, or a smart city that is built for our next generation. Similarly, the financial system, both centralized and decentralized, may have different vulnerabilities. For this reason, post-quantum security should be addressed as part of an organization’s overall cybersecurity strategy. It is of such importance that both the C-suite and the board should pay attention. While blockchain-based infrastructures are still considered safe, being largely hash-based, transactions are digitally signed using traditional encryption technologies such as elliptic curve and therefore could be quantum-vulnerable at the end points. Blockchain with quantum-safe features will no doubt gain more traction as NFTs, metaverse and crypto-assets continue to mature.


‘Post-Quantum’ Cryptography Scheme Is Cracked on a Laptop

It’s impossible to guarantee that a system is unconditionally secure. Instead, cryptographers rely on enough time passing and enough people trying to break the problem to feel confident. “That does not mean that you won’t wake up tomorrow and find that somebody has found a new algorithm to do it,” said Jeffrey Hoffstein, a mathematician at Brown University. Hence why competitions like NIST’s are so important. In the previous round of the NIST competition, Ward Beullens, a cryptographer at IBM, devised an attack that broke a scheme called Rainbow in a weekend. Like Castryck and Decru, he was only able to stage his attack after he viewed the underlying mathematical problem from a different angle. And like the attack on SIDH, this one broke a system that relied on different mathematics than most proposed post-quantum protocols. “The recent attacks were a watershed moment,” said Thomas Prest, a cryptographer at the startup PQShield. They highlight how difficult post-quantum cryptography is, and how much analysis might be needed to study the security of various systems.


Intel Adds New Circuit to Chips to Ward Off Motherboard Exploits

Under normal operations, once the microcontrollers activate, the security engine loads its firmware. In this motherboard hack, attackers attempt to trigger an error condition by lowering the voltage. The resulting glitch gives attackers the opportunity to load malicious firmware, which provides full access to information such as biometric data stored in trusted platform module circuits. The tunable replica circuit protects systems against such attacks. Nemiroff describes the circuit as a countermeasure to prevent the hardware attack by matching the time and corresponding voltage at which circuits on a motherboard are activated. If the values don't match, the circuit detects an attack and generates an error, which will cause the chip's security layer to activate a failsafe and go through a reset. "The only reason that could be different is because someone had slowed down the data line so much that it was an attack," Nemiroff says. Such attacks are challenging to execute because attackers need to get access to the motherboard and attach components, such as voltage regulators, to execute the hack.


Why Migrating a Database to the Cloud is Like a Heart Transplant

Your migration project’s enemies are surprises. There are numerous differences between databases from number conversions to date/time handling, to language interfaces, to missing constructs, to rollback behavior, and many others. Proper planning will look at all the technical differences and plan for them. Database migration projects also require time and effort, according to Ramakrishnan, and if they are rushed the results will not be what anyone wants. He recommended that project leaders create a single-page cheat sheet to break down the scope and complexity of the migration to help energize the team. It should include the project’s goals, the number of users impacted, the reports that will be affected by the change, the number of apps it touches, and more. Before embarking on the project, organizations should ask the following question: “How much will it cost to recoup the investment in the new database migration?” Organizations need to check that the economics are sound, and that means also analyzing the opportunity cost for not completing the migration.



Quote for the day:

"Do not follow where the path may lead. Go instead where there is no path and leave a trail." -- Muriel Strode

Daily Tech Digest - August 24, 2022

3 reasons cloud computing doesn’t save money

Without cloud spending visibility and insights, you’re basically driving a car without a dashboard. You don’t know how fast you’re going or when you’re about to run out of gas. A guessing game turns into a big surprise when cloud spending is way above what everyone initially thought. That sucking sound you hear is the value that you thought cloud computing would bring now leaving the business. Second, there is no discipline or accountability. A lack of cloud cost monitoring means we can’t see what we’re spending. The other side of this coin is a lack of accountability. Even when a business monitors cloud spending, that data is useless if everyone knows there are no penalties. Why should people change their behavior? They need known incentives to conserve cloud computing resources as well as known consequences. Accountability problems can usually be corrected by leadership making some unpopular decisions. Trust me, you’ll either deal with accountability now or wait until later when it becomes much harder to fix.


How attackers use and abuse Microsoft MFA

The legitimate owner of a thusly compromised account is unlikely to spot that the second MFA app has been added. “It is only obvious if one specifically looks for it. If one goes to the M365 security portal, they will see it; but most users never go to that place. It is where you can change your password without being prompted for it, or change an authenticator app. In day-to-day use, people only change passwords when mandated through the prompt, or when they change their phone and want to move their authenticator app,” Mitiga CTO Ofer Maor told Help Net Security. Also, an isolated, random prompt for the second authentication factor triggered by the attacker can easily not be seen or ignored by the legitimate account owner. “They get prompted, but once the attacker authenticates on the other authenticator, that prompt disappears. There is no popup or anything that says ‘this request has been approved by another device’ (or something of that sort) to alert the user of the risk. ... ” Maor noted.


The emergence of the chief automation officer

AI and automation can transform IT and business processes to help improve efficiencies, save costs and enable people — employees — to focus on higher-value work. Two of the most important areas of IT operations in the enterprise are issue avoidance and issue resolution because of the massive impact they have on cost, productivity, and brand reputation. The rapid digital expansion among enterprises has led to an immediate uptick in demand from IT leaders to embrace AIops tools to increase workflow productivity and ensure proactive, continuous application performance. With AIops, IT systems and applications are more reliable, and complex work environments can be managed more proactively, potentially saving hundreds of thousands of dollars. This can enable IT staff to focus on high-value work instead of laborious, time-consuming tasks, and identify potential issues before they become major problems.


How a Service Mesh Simplifies Microservice Observability

According to Jay Livens, observability is the practice of capturing the system’s current state based on the metrics and logs it generates. It’s a system that helps us with monitoring the health of our application, generating alerts on failure conditions, and capturing enough information to debug issues whenever they happen. ... A major aspect of observability is capturing network telemetry, and having good network insights can help us solve a lot of the problems we spoke about initially. Normally, the task of generating this telemetry data falls to developers to implement. This is an extremely tedious and error-prone process that doesn’t really end at telemetry. Developers are also tasked with implementing security features and making communication resilient to failures. Ideally, we want our developers to write application code and nothing else. The complications of microservices networking need to be pushed down to the underlying platform. A better way to achieve this decoupling would be to use a service mesh like Istio, Linkerd, or Consul Connect.


IT talent: 4 interview questions to prep for

Whether managers have a more hands-on approach or allow their direct reports more autonomy, identifying this during the interview process is in the best interest of both parties. Additionally, some candidates thrive in an office, while others are hoping for a completely remote position or even a hybrid option. Discussing and defining preferences and working environments helps clarify candidates’ expectations for their roles. It also benefits hiring managers, prospective employees, and the companies, which can avoid high turnover rates by being transparent in their recruiting phase. ... people generally love to talk about things that make them proud. By asking this question, hiring managers allow candidates to talk about who they are as individuals rather than just what they bring to the larger business. Obviously, pride can encompass past work projects, but some candidates might also cite volunteer contributions, family achievements, or other accomplishments. Overall, candidates should always be prepared to discuss experiences that have contributed to their growth. 


Beyond purpose statements

Many CEOs are starting to sound like politicians, throwing around lofty language that is vague and hard to pin down. And therein lies the problem, or certainly the challenge: to remain credible and trustworthy, leaders need to shift the conversation from fuzzy purpose bromides to more tangible and concrete statements about the impact their companies are having on society. That is not simply a matter of semantics, as there is a world of difference between purpose and impact. It is difficult to challenge a purpose. If a company says its reason for existing in some form or fashion is to try to make the world a better place, how can you pressure-test that claim? If that company is providing goods or services that customers are willing to pay for, and it employs people and pays vendors, then, ipso facto, it is doing something that has a perceived value. As long as it’s not doing anything criminal or unethical, it’s working “to promote the good of the people,” to borrow the language from one organization’s mission statement. But if you are claiming that you are making an impact, then you need proof. And that’s what makes a statement powerful.


Managing Expectations: Explainable A.I. and its Military Implications

AI systems can be purposefully programmed to cause death or destruction, either by the users themselves or through an attack on the system by an adversary. Unintended harm can also result from inevitable margins of error which can exist or occur even after rigorous testing and proofing of the AI system according to applicable guidelines. Indeed, even ‘regular’ operations of deployed AI systems are mired with faults that are only discoverable at the output stage. ... A primary cause for such faults is flawed training datasets and commands, which can result in misrepresentation of critical information as well as unintended biases. Another, and perhaps far more challenging, reason is issues with algorithms within the system which are undetectable and inexplicable to the user. As a result, AI has been known to produce outputs based on spurious correlations and information processing that does not follow the expected rules, similar to what is referred to in psychology as the ‘Clever Hans effect’.


POCs, Scrum, and the Poor Quality of Software Solutions

It is generally accepted that quality is the ‘reliability of a product’. ‘Reliability’, though, as we are used to thinking of it in classical science, is the attribute of consistently getting the same results under the same conditions. In this classical view, building a Quality solution means that we should build a product that never fails. Ironically, understanding reliability this way harms Quality instead of achieving it. Aiming to build a product that never fails can only result in extremely complex systems that are hard to maintain, causing Quality to degrade over time. The issue with reliability in this classical sense is the false assumption that we control all conditions, while in fact we don’t (hardware failure, network latency, external service throttling, etc.). We need to extend the meaning of reliability to also accommodate cases when the conditions are not aligned: Quality is not only a measure of how reliable a software product is when it is up and running, but also a measure of how reliable it is when it fails.


Critical infrastructure is under attack from hackers. Securing it needs to be a priority - before it's too late

In order to protect networks – and people – from the consequences of attacks, which could be significant, many of the required security measures are among the most commonly recommended and often simplest practices. ... Cybersecurity can become more complex for critical infrastructure, particularly when dealing with older systems, which is why it's vital that those running them know their own network, what's connected to it and who has access. Taking all of this into account, providing access only when necessary can keep networks locked down. In some cases, that might mean ensuring older systems aren't connected to the outside internet at all, but rather on a separate, air-gapped network, preferably offline. It might make some processes more inconvenient to manage, but it's better than the alternative should a network be breached. Incidents like the South Staffordshire Water attack and the Florida water incident show that cyber criminals are targeting critical infrastructure more and more. Action needs to be taken sooner rather than later to prevent potentially disastrous consequences not just for organizations, but for people too.


How to Nurture Talent and Protect Your IT Stars

Anderson adds that building out growth and learning opportunities starts with the CTO. “That means ensuring we have learning and training goals identified, which is used as a critical element for annual performance expectations of our IT leaders and managers, not only for themselves, but for their staff,” he says. As Court notes, the company invests internally through LIFT University, with a cadre of continuing education offerings augmented by external training. “For career growth, I recommend IT teams have a close reporting or partnership to the engineering and product teams,” Anderson adds. He says the rationale for this is simple -- as employees want to perfect their craft, they need to work for and with people who understand their craft and push them to continually learn through team, project, and program collaboration. “As we all know, the one constant is that technology is constantly evolving, so continuous learning for employees, especially our IT team, is a must,” he says. SoftServe’s Semenyshyn says that closely monitoring employee burnout is a priority across the IT industry, pointing out that the advantage of the IT business in a large global company is the possibility of rotations.



Quote for the day:

"Teamwork is the secret that make common people achieve uncommon result." -- Ifeanyi Enoch Onuoha

Daily Tech Digest - August 23, 2022

Unstructured data storage – on-prem vs cloud vs hybrid

Enterprises have responded to growing storage demands by moving to larger, scale-out NAS systems. The on-premise market here is well served, with suppliers Dell EMC, NetApp, Hitachi, HPE and IBM all offering large-capacity NAS technology with different combinations of cost and performance. Generally, applications that require low latency – media streaming or, more recently, training AI systems – are well served by flash-based NAS hardware from the traditional suppliers. But for very large datasets, and the need to ease movement between on-premise and cloud systems, suppliers are now offering local versions of object storage. The large cloud “hyperscalers” even offer on-premise, object-based technology so that firms can take advantage of object’s global namespace and data protection features, with the security and performance benefits of local storage. However, as SNIA warns, these systems typically lack interoperability between suppliers. The main benefits of on-premise storage for unstructured data are performance, security, compliance and control – firms know their storage architecture, and can manage it in a granular way.
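One practical upside of that shared object model is portability of client code: many on-premise object stores expose an S3-compatible API, so the same client can address local and cloud targets by switching only the endpoint. The sketch below is hypothetical, using boto3 with placeholder endpoint, credentials and bucket names rather than a specific supplier’s product.

```python
# Hypothetical sketch: an S3-style client addressing an on-premise,
# S3-compatible object store simply by pointing at a different endpoint.
import boto3

# Endpoint, credentials and bucket names below are placeholders, not real values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.datacentre.example.local",
    aws_access_key_id="LOCAL_KEY",
    aws_secret_access_key="LOCAL_SECRET",
)

# Upload one object, then list what the bucket holds.
s3.upload_file("telemetry.parquet", "unstructured-data", "2022/08/telemetry.parquet")
for obj in s3.list_objects_v2(Bucket="unstructured-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```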


What is CXL, and why should you care?

Eventually, CXL is expected to be an all-encompassing cache-coherent interface for connecting any number of CPUs, memory, process accelerators (notably FPGAs and GPUs), and other peripherals. The CXL 3.0 spec, announced last week at the Flash Memory Summit (FMS), takes that disaggregation even further by allowing other parts of the architecture – processors, storage, networking, and other accelerators – to be pooled and addressed dynamically by multiple hosts and accelerators, just like the memory in 2.0. The 3.0 spec also provides for direct peer-to-peer communications over a switch or even across a switch fabric, so two GPUs could theoretically talk to one another without using the network or getting the host CPU and memory involved. Kurt Lender, co-chair of the CXL marketing work group and a senior ecosystem manager at Intel, said, “It’s going to be basically everywhere. It’s not just IT guys who are embracing it. Everyone’s embracing it. So this is going to become a standard feature in every new server in the next few years.” So how will the applications running in enterprise data centers benefit?


Technology alone won’t solve your organizational challenges

Whatever your organization’s preference for team building, it should be carefully selected from a range of options, and it should be clear to everyone why the firm chose one particular structure over another and what’s expected of everyone participating. Start with desired outcomes and cultural norms, then articulate principles to empower action, and, finally, provide the skills and tools needed for success. ... Even in the most forward-thinking organizations, people want to know what a meeting is supposed to achieve, what their role is in that meeting, and if gathering people around a table or their screens is the most effective and efficient way to get to the desired outcome. Is there a decision to be made? Or is the purpose information sharing? Have people been given the chance to opt out if the above points are not clear? Asking these questions can serve as a rapid diagnostic for what you are getting right—and wrong—in your meetings. Poorly run meetings sap energy and breed mediocrity.


For developers, too many meetings, too little 'focus' time

That’s not to say that meetings aren’t important, but it makes sense for managers to find the right balance for their teams, said Dan Kador, vice president of engineering at Clockwise. “It's something that companies have to pay attention to and try to understand their meeting culture — what's working and what's not working for them." “It is important that teams get together to discuss things and make sure they are all on the same page, but often meetings are scheduled at regular intervals even if they aren’t necessary,” said Jack Gold, principal analyst and founder at J. Gold Associates. “We are all subjected to weekly meetings, or other intervals, where, even if there is nothing to discuss, the meeting takes place anyway. And some meeting organizers feel obligated to use up the entire scheduled time.” Of course, meeting overload is not just an issue for those writing code. “Too much time spent in meetings is not just a problem for developers,” said Gold. “It is a problem across the board for employees in many companies.”


How To Remain Compliant In The New Era Of Payment Security

To counter the threat of e-commerce skimming, the card companies are again turning to the two tools in their arsenal: making stolen data worthless and creating new technical security standards. To make stolen payment card data worthless, there’s a chip-equivalent technology for e-commerce called 3-D Secure v2, which has already been rolled out in the EU. This technology requires something more than just knowledge of the numbers printed on a payment card to make an online transaction. After entering their payment card data, the consumer may have to further confirm a purchase using a bank’s smartphone app or by entering a code received by SMS. Alongside this re-engineering of the payment system, the latest version of the Payment Card Industry Data Security Standard (PCI DSS) includes new technical requirements to prevent and detect e-commerce skimming attacks. PCI DSS applies to all entities involved in the payment ecosystem, including retailers, payment processors and financial institutions. Firstly, website operators will need to maintain an inventory of all the scripts included in their website and determine why each script is necessary.
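That inventory requirement can be partly automated. As a minimal, hypothetical sketch (the checkout URL and approved list are placeholders, not from the standard or the article), the snippet below collects every external script referenced by a payment page and flags any that are not on the maintained inventory, which is exactly the kind of unexpected addition a skimming attack introduces.

```python
# Hypothetical sketch: compare the scripts actually loaded by a payment page
# against a maintained inventory, flagging anything unexpected.
from html.parser import HTMLParser
from urllib.request import urlopen

APPROVED_SCRIPTS = {            # the documented inventory (placeholder values)
    "https://cdn.example-shop.com/js/checkout.js",
    "https://js.example-psp.com/v2/fields.js",
}

class ScriptCollector(HTMLParser):
    """Collects the src attribute of every <script> tag on the page."""
    def __init__(self):
        super().__init__()
        self.scripts = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.scripts.add(src)

# Placeholder URL for the checkout page being audited.
html = urlopen("https://example-shop.com/checkout").read().decode("utf-8", "replace")
collector = ScriptCollector()
collector.feed(html)

for src in sorted(collector.scripts - APPROVED_SCRIPTS):
    print("script not in inventory:", src)
```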


Q&A: How Data Science Fits into the Cloud Spend Equation

The great thing about cloud is you use it when you need it. Obviously, you pay for using it when you need it, but oftentimes data science applications, especially ones you’re running over large datasets, aren’t running continuously or don’t need to be structured in a way that they run continuously. Therefore, you’re talking about a very concentrated amount of spend for a very short amount of time. Buying hardware to do that means your hardware sits idle unless you are very active about making sure you’re using that resource efficiently over time. One of the biggest advantages of cloud is that it runs and scales as you need it to. So even a tiny team can run a massive computation and run it when they need to and not continuously. That adds challenges, of course. “I fired this thing off on Friday, I come back in on Monday and it’s still running, and I accidentally spent $6,000 this weekend. Oops.” That happens all the time, and so much of that is figuring out how to establish guardrails. Sometimes data science gets treated like, “You know, they’re going to do whatever they need to.”
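Those guardrails do not have to be elaborate. The sketch below is purely hypothetical (the hourly rate, budget cap and workload are invented, and no specific cloud provider’s billing API is used): a long-running job estimates its own accumulated spend between batches and stops itself once it crosses a hard cap, the kind of simple circuit breaker that would have caught the $6,000 weekend.

```python
# Hypothetical spend guardrail: a long-running job periodically estimates what
# it has cost so far and stops if it crosses a hard budget cap.
HOURLY_RATE_USD = 32.0       # placeholder price of the compute being used
BUDGET_CAP_USD = 500.0       # hard stop for this experiment
SECONDS_PER_BATCH = 900      # assumed duration of one unit of work

def run_one_batch(i: int) -> None:
    """Stand-in for one unit of the real workload (training step, query, etc.)."""
    pass

spend = 0.0
for batch in range(10_000):
    run_one_batch(batch)
    spend += SECONDS_PER_BATCH / 3600 * HOURLY_RATE_USD
    if spend >= BUDGET_CAP_USD:
        # In a real system: checkpoint results, release the cluster, alert the owner.
        print(f"budget cap reached after {batch + 1} batches (~${spend:.0f}); stopping")
        break
```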


Advantages of open source software compared to paid equivalents

The strength of open source technology is the fact that these products are developed with an iterative approach by a large group of experts. Open source communities are made up of diverse sets of people from across the world. This kind of diversity is beneficial because ideas and issues get vetted in multiple ways. From an enterprise perspective, open source software is a safe investment because you know there is a dedicated community with product experience. Many developers aren’t working for money and are easy to approach and ask for help. You can raise questions or concerns directly with developers, or opt to obtain a paid support plan through the community for highly technical inquiries. ... Of course, since open source products are designed for a large audience, sometimes they won’t be able to perfectly fit a company’s needs. Fortunately, the open source approach encourages customisation and integration, meaning your own internal teams can start with an open source baseline and tweak it. Improvements can also be fed back into the open source development cycle.


3 steps for CIOs to build a sustainable business

Data is key. To establish a baseline, the CIO must measure the impact of the enterprise’s full technology stack, including outside partners and providers. This requires asking for, extracting, and reconciling data across external parties – and remembering to aggregate more than just decarbonization data. Cloud and sourcing choices and the disposition of assets after a cloud migration contribute to the carbon footprint. The CIO must also guide employees to make good sustainability choices. One example: according to Cisco, there are 27.1 billion devices connected to the internet – that’s more than three devices for every person on the planet. Many enterprise employees carry two mobile phones but don’t need to – existing technology enables them to segment two different environments on one device. Also, organizations with service contracts can reject hardware refreshes included in a contract, empowering employees to decide if they need a new device or just a new battery.


Architecture and Governance in a Hybrid Work Environment

Architects can’t architect if they don’t speak to other people. Likewise, governance isn’t effective if you are talking best practice to yourself, alone in a dark room someplace. Getting this right in normal times isn’t always easy. People have meetings, they are working hard and don’t want to be disturbed, they need their coffee from the corporate cafeteria or the Starbucks down the street, they’re at lunch or they’re leaving at 4:30 to get to their kid’s baseball game. In short, it isn’t always possible in normal times to round people up and have a day-long whiteboard session on architecture. With hybrid working models, it is even more difficult because we can’t simply walk over to the cube next to us and have a conversation. In fact, most of the time we have no idea where people actually are or what they’re doing. We rely on text, chat, Teams, Outlook and other tools to give us a sense of whether someone has five minutes to chat. If you want a three-hour whiteboard session, that involves a high degree of coordination with people’s calendars in Outlook. Even then, people always seem to have ‘hard stops’ at times that are really incompatible with thinking and design sessions.


Karma Calling: LockBit Disrupted After Leaking Entrust Files

Given the damage and disruption being caused by LockBit and other ransomware groups, one obvious question is why these gangs aren't being disrupted with greater frequency, says Allan Liska, principal intelligence analyst at Recorded Future. "We all know these sites are MacGyvered together with baling wire and toothpicks and are rickety as hell. We should do stuff like this to impose cost on them," Liska says. Some members of the information security community prefer stronger measures, of the "Aliens" protagonist Ripley variety. "I always say: go kinetic and solve the problem permanently," says Ian Thornton-Trump, CISO of Cyjax. "Attribution is for the lawyers. I recommend a strike from orbit, it's the only way to be sure," he says. Another explanation for the attack would be one or more governments opting to "impose costs" on the ransomware gang, says Brett Callow, a threat analyst at Emsisoft. As he notes, the imposing-costs phrase is a direct quote from Gen. Paul M. Nakasone, the head of Cyber Command, who last year told The New York Times that the military has been tasked not just with helping law enforcement track ransomware groups, but also with disrupting them.



Quote for the day:

"The manager has a short-range view; the leader has a long-range perspective." -- Warren G. Bennis