Daily Tech Digest - August 28, 2022

How to build a winning analytics team

Analytics teams thrive in dynamic environments that reward curiosity, encourage innovation, and set high expectations. Building and reinforcing this type of culture can help put organizations on a path to earning impressive returns from analytics investments. An analytics culture stays active when CXOs reward curiosity over perfection. Encourage analysts to challenge convention and ask questions as a way to improve quality and reduce risk. This thinking goes hand in hand with a test-and-learn mentality, where pushing boundaries through proactive experimentation helps teams identify what works and optimize accordingly. It’s also important to create a culture where failure and success are celebrated equally. Giving airtime to what went wrong allows the team to learn from its mistakes more effectively and to see that perfection is an unhealthy pipe dream. This encourages an environment that holds analysts accountable for delivering quality processes and results, further helping to mitigate risk and improve marketing programs.


How SSE Renewables uses Azure Digital Twins for more than machines

This approach will allow SSE to experiment with reducing risks to migrating birds. For example, they can determine an optimum blade speed that will allow flocks to pass safely while still generating power. By understanding the environment around the turbines, it will be possible to control them more effectively and with significantly less environmental impact. Simon Turner, chief technology officer for data and AI at Avanade, described this approach as “an autonomic business.” Here, data and AI work together to deliver a system that is effectively self-operating, one he described as using AI to “look after certain things that you understood that could guide the system to make decisions on your behalf.” Key to this approach is extending the idea of a digital twin with machine learning and large-scale data. ... As Turner notes, this approach can be extended to more than wind farms, using it to model any complex system where adding new elements could have a significant effect, such as understanding how water catchment areas work or how hydroelectric systems can be tuned to let salmon pass unharmed on their way to traditional breeding grounds, while still generating power.


McKinsey report: Two AI trends top 2022 outlook

Roger Roberts, partner at McKinsey and one of the report’s coauthors, said of applied AI, which is defined “quite broadly” in the report: “We see things moving from advanced analytics towards… putting machine learning to work on large-scale datasets in service of solving a persistent problem in a novel way.” That move is reflected in an explosion of publication around AI, not just because AI scientists are publishing more, but because people in a range of domains are using AI in their research and pushing the application of AI forward, he explained. ... According to the McKinsey report, industrializing machine learning (ML) “involves creating an interoperable stack of technical tools for automating ML and scaling up its use so that organizations can realize its full potential.” The report noted that McKinsey expects industrializing ML to spread as more companies seek to use AI for a growing number of applications. “It does encompass MLOps, but it extends more fully to include the way to think of the technology stack that supports scaling, which can get down to innovations at the microprocessor level,” said Roberts.


CISA: Prepare now for quantum computers, not when hackers use them

The main negative implication of quantum computing concerns the cryptography of secrets, a fundamental element of information security. Cryptographic schemes that are considered secure today could be cracked in mere seconds by quantum computers, leaving individuals, companies, and entire countries powerless against the computing supremacy of their adversaries. “When quantum computers reach higher levels of computing power and speed, they will be capable of breaking public key cryptography, threatening the security of business transactions, secure communications, digital signatures, and customer information,” explains CISA. This could threaten data in transit relating to top-secret communications, banking operations, military operations, government meetings, critical industrial processes, and more. Yesterday, China's Baidu introduced “Qian Shi,” an industry-level quantum supercomputer capable of achieving stable performance with 10 qubits.
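The threat described above comes down to factoring: RSA and similar public-key schemes are secure only because factoring large numbers is classically infeasible, while Shor’s algorithm on a sufficiently large quantum computer would make it tractable. As a purely illustrative sketch (the modulus below is a toy textbook value, not a real key), classical trial division cracks a tiny modulus instantly but scales hopelessly to 2048-bit keys:

```python
def trial_factor(n: int) -> tuple[int, int]:
    """Find a nontrivial factor pair of n by classical trial division.

    Runs in roughly O(sqrt(n)) steps, which is why this is instant for toy
    moduli but infeasible for real 2048-bit RSA keys. Shor's algorithm on a
    large quantum computer would remove that protection.
    """
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError(f"{n} is prime")

# A classic toy "RSA modulus": 3233 = 53 * 61.
p, q = trial_factor(3233)
```

At 10 qubits, machines like the one mentioned above are nowhere near running Shor’s algorithm on real key sizes, which is precisely why CISA urges preparing migrations now rather than waiting.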


How Are Business Intelligence And Data Management Related?

Business intelligence (BI) describes the procedures and tools that help extract useful, actionable information and insights from data. A company’s data is accessed by business intelligence tools, which then display analytics and insights as reports, dashboards, graphs, summaries, and charts. Business intelligence has advanced significantly from its theoretical inception in the 1950s, and you must realize that it is not just a tool for big businesses. Most BI providers are tailoring their software to users’ needs because they recognize that our current era is considerably more oriented toward small structures like start-ups. SaaS, or software-as-a-service, vendors in particular have embraced this shift. Another change is that BI is a more straightforward tool than it once was. It is still a professional tool; managing data is not simple, even with the most powerful technology. Nevertheless, with the emergence of the cloud and SaaS in the early 21st century, BI has developed into something more accessible than the local software of old, which required installation on every computer in the organization and could represent a sizable expenditure.


Oxford scientist says greedy physicists have overhyped quantum computing

It’s unclear why Dr. Gourianov would leave big tech out of the argument entirely. There are dozens upon dozens of papers from Google and IBM alone demonstrating breakthrough after breakthrough in the field. Gourianov’s primary argument against quantum computing appears, inexplicably, to be that quantum computers won’t be very useful for cracking quantum-resistant encryption. With respect, that’s like saying we shouldn’t develop surgical scalpels because they’re practically useless against chain mail armor. Per Gourianov’s article: Shor’s algorithm has been a godsend to the quantum industry, leading to untold amounts of funding from government security agencies all over the world. However, the commonly forgotten caveat here is that there are many alternative cryptographic schemes that are not vulnerable to quantum computers. It would be far from impossible to simply replace these vulnerable schemes with so-called “quantum-secure” ones. This appears to suggest that Gourianov believes at least some physicists have pulled a bait-and-switch on governments and investors by convincing everyone that we need quantum computers for security.


Computer vision is primed for business value

In healthcare, computer vision is used extensively in diagnostics, such as in AI-powered image and video interpretation. It is also used to monitor patients for safety, and to improve healthcare operations, says Gartner analyst Tuong Nguyen. “The potential for computer vision is enormous,” he says. “It’s basically helping machines make sense of the world. The applications are infinite — really, anything you need to see. The entire world.” According to the fourth annual Optum survey on AI in healthcare, released at the end of 2021, 98% of healthcare organizations either already have an AI strategy or are planning to implement one, and 99% of healthcare leaders believe AI can be trusted for use in health care. Medical image interpretation was one of the top three areas cited by survey respondents where AI can be used to improve patient outcomes. The other two areas, virtual patient care and medical diagnosis, are also ripe for computer vision. Take, for example, idiopathic pulmonary fibrosis, a deadly lung disease that affects hundreds of thousands of people worldwide.


AI Therapy: Digital Solution to Address Mental Health Issues

AI for health has long been a topic of discussion, specifically around therapy, by bringing digital solutions to mental health issues. Some applications have already been developed, such as Genie in a Headset, which manages human emotional behavior in work environments. But bringing AI into therapy means building an AI that perceives emotion and is geared toward improving mental health. The fundamental objective of AI therapy is to assist patients in fighting mental illnesses. Ideally, this technology would be able to distinguish each patient’s needs and personalize their mental health programs through an efficient data collection process. ... Psychological therapy is a tough job that requires extracting confidential information that patients hesitate to share. Like any other medical issue, it is essential to diagnose the problem before curing it. It requires exquisite skill to make someone comfortable. An AI therapist can access your cellphone, laptop, personal data, emails, all-day movement, and routine, making it more efficient in understanding you and your problems. Knowing problems in depth gives an AI therapist an advantage over the usual therapist.


What is the Microsoft Intelligent Data Platform?

The pieces that make up the Microsoft Intelligent Data Platform are services you may already be using because it includes all of Microsoft’s key data services, such as SQL Server 2022, Azure SQL, Cosmos DB, Azure Synapse, Microsoft Purview and more. But you’re probably not using them together as well as you could; the Intelligent Data Platform is here to make that easier. “These are the best-in-class services across what we consider the three core pillars of a data platform,” Mansour explained. According to Mansour, the Microsoft Intelligent Data Platform offers services for databases and operational data stores, analytics, and data governance, providing authorized users with insight that will allow them to properly understand, manage and govern their business’s data. “Historically, customers have been thinking about each of those areas independent from one another, and what the Intelligent Data Platform does is bring all these pieces together,” said Mansour. Integrating databases, analytics and governance isn’t new either, but the point of presenting this as a platform is the emphasis on simplifying the experience of working with it.


Threatening clouds: How can enterprises protect their public cloud data?

Public clouds don’t inherently impose security threats, said Gartner VP analyst Patrick Hevesi — in fact, hyperscale cloud providers usually have more security layers, people and processes in place than most organizations can afford in their own data centers. However, the biggest red flag for organizations when selecting a public cloud provider is the lack of visibility into their security measures, he said. Some of the biggest issues in recent memory: Misconfigurations of cloud storage buckets, said Hevesi. This has opened files up for data exfiltration. Some cloud providers have also had outages due to misconfigurations of identity platforms. This has prevented their cloud services from starting up properly, which in turn affected tenants. Smaller cloud providers, meanwhile, have been taken offline due to distributed denial-of-service (DDoS) attacks. This is when perpetrators make a machine or network resource unavailable to intended users by disrupting services — either short-term or long-term — of a host connected to a network.
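The storage-bucket misconfiguration Hevesi describes is usually an access-control list that grants read access to "all users". The sketch below shows the core of such a check; the dictionary shape loosely mirrors S3-style ACL responses, but the field names and grantee URIs here are illustrative and not tied to any one provider's SDK:

```python
# Group URIs that S3-style ACLs use to mean "everyone" (illustrative set).
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_publicly_readable(acl: dict) -> bool:
    """Flag an ACL that grants read (or full) access to a public group."""
    for grant in acl.get("Grants", []):
        grantee_uri = grant.get("Grantee", {}).get("URI")
        if grantee_uri in PUBLIC_GRANTEES and \
                grant.get("Permission") in ("READ", "FULL_CONTROL"):
            return True
    return False

# A misconfigured bucket: world-readable, exactly the exfiltration risk above.
misconfigured_acl = {
    "Grants": [{"Grantee": {"Type": "Group",
                            "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
                "Permission": "READ"}]
}
```

A periodic sweep applying a check like this across all buckets is one concrete way to regain the visibility the article says organizations lack.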



Quote for the day:

“Real integrity is doing the right thing, knowing that nobody’s going to know whether you did it or not.” -- Oprah Winfrey

Daily Tech Digest - August 27, 2022

Intel Hopes To Accelerate Data Center & Edge With A Slew Of Chips

McVeigh noted that Intel’s integrated accelerators will be complemented by the upcoming discrete GPUs. He called the Flex Series GPUs “HPC on the edge,” with their low power envelopes, and pointed to Ponte Vecchio – complete with 100 billion transistors in 47 chiplets that leverage both Intel 7 manufacturing processes as well as 5 nanometer and 7 nanometer processes from Taiwan Semiconductor Manufacturing Co – and then Rialto Bridge. Both Ponte Vecchio and Sapphire Rapids will be key components in Argonne National Laboratory’s Aurora exascale supercomputer, which is due to power on later this year and will deliver more than 2 exaflops of peak performance. ... “Another part of the value of the brand here is around the software unification across Xeon, where we leverage the massive amount of capabilities that are already established through decades throughout that ecosystem and bring that forward onto our GPU rapidly with oneAPI, really allowing for both the sharing of workloads across CPU and GPU effectively and to ramp the codes onto the GPU faster than if we were starting from scratch,” he said.


Performance isolation in a multi-tenant database environment

Our multi-tenant Postgres instances operate on bare metal servers in non-containerized environments. Each backend application service is considered a single tenant, where they may use one of multiple Postgres roles. Due to each cluster serving multiple tenants, all tenants share and contend for available system resources such as CPU time, memory, disk IO on each cluster machine, as well as finite database resources such as server-side Postgres connections and table locks. Each tenant has a unique workload that varies in system level resource consumption, making it impossible to enforce throttling using a global value. This has become problematic in production, affecting neighboring tenants: Throughput: A tenant may issue a burst of transactions, starving other tenants of shared resources and degrading their performance. Latency: A single tenant may issue very long or expensive queries, often concurrently, such as large table scans for ETL extraction or queries with lengthy table locks. Both of these scenarios can result in degraded query execution for neighboring tenants. Their transactions may hang or take significantly longer to execute due to either reduced CPU share time, or slower disk IO operations due to many seeks from misbehaving tenant(s).


Quantum Encryption Is No More A Sci-Fi! Real-World Consequences Await

Quantum computing will enable enterprise customers to perform complex simulations in significantly less time than traditional software. Quantum algorithms are very challenging to develop, implement, and test on current quantum computers. Quantum techniques also are being used to improve the randomness of computer-based random number generators. The world’s leading scientists in the field of quantum information engineering are working to turn what was once in the realm of science fiction into reality. Businesses need to deploy next-generation data security solutions with equally powerful protection based on the laws of quantum physics, literally fighting quantum computers with quantum encryption. Quantum computers today are no longer considered to be science fiction. The main difference is that quantum encryption uses quantum bits, or qubits, composed of optical photons, compared to electrical binary digits or bits. Qubits can also be inextricably linked together using a phenomenon called quantum entanglement.


What Is The Difference Between Computer Vision & Image processing?

We are constantly exposed to and engaged with various visually similar objects around us. By using machine learning techniques, the discipline of AI known as computer vision enables machines to see, comprehend, and interpret the visual environment around us. It uses machine learning approaches to extract useful information from digital photos, movies, or other observable inputs by identifying patterns. Although computer vision and image processing can look alike, they differ in a few ways. Computer vision aims to distinguish between, classify, and arrange images according to their distinguishing characteristics, such as size, color, etc. This is similar to how people perceive and interpret images. ... Digital image processing uses a digital computer to process digital and optical images. A computer views an image as a two-dimensional signal composed of pixels arranged in rows and columns. A digital image comprises a finite number of elements, each located in a specific place with a particular value. These elements are known as pixels, or picture elements.
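The distinction becomes concrete if you treat a tiny grayscale image as the 2-D grid of pixel values described above: image processing maps pixels to pixels, while vision reduces pixels to a judgment about content. The example below is deliberately simplified (a threshold versus a crude brightness label):

```python
# A 3x4 grayscale "image": each value is a pixel intensity in 0..255.
image = [
    [10, 12, 200, 210],
    [9, 11, 205, 220],
    [8, 10, 198, 215],
]

def threshold(img, cutoff=128):
    """Image processing: transform the image into another image (binary mask)."""
    return [[1 if px >= cutoff else 0 for px in row] for row in img]

def classify_brightness(img, cutoff=128):
    """'Vision' in miniature: reduce the image to a label, not another image."""
    pixels = [px for row in img for px in row]
    bright_fraction = sum(px >= cutoff for px in pixels) / len(pixels)
    return "bright" if bright_fraction > 0.5 else "dark"

binary = threshold(image)
label = classify_brightness(image)
```

Real computer vision replaces the hand-written rule in `classify_brightness` with learned pattern recognition, but the input/output shapes of the two disciplines differ in exactly this way.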


Lessons in mismanagement

In the decades since the movie’s release, the world has become a different place in some important ways. Women are now everywhere in the world of business, which has changed irrevocably as a result. Unemployment is quite low in the United States and, by Continental standards, in Europe. Recent downturns have been greeted by large-scale stimuli from central banks, which have blunted the impact of stock market slides and even a pandemic. But it would be foolish to think that the horrendous managers and desperate salesmen of Glengarry Glen Ross exist only as historical artifacts. Mismanagement and desperation go hand in hand and are most apparent during hard times, which always come around sooner or later. By immersing us in the commercial and workplace culture of the past, movies such as Glengarry can help us understand our own business culture. But they can also help prepare us for hard times to come—and remind us how not to manage, no matter what the circumstances. ... Everyone, in every organization, has to perform. 


How the energy sector can mitigate rising cyber threats

As energy sector organisations continue expanding their connectivity to improve efficiency, they must ensure that the perimeters of their security processes keep up. Without properly secured infrastructure, no digital transformation will ever be successful, and not only internal operations, but also the data of energy users are bound to become vulnerable. But by following the above recommendations, energy companies can go a long way in keeping their infrastructure protected in the long run. This endeavour can be strengthened further by partnering with cyber security specialists like Dragos, which provides an all-in-one platform that enables real-time visualisation, protection and response against ever present threats to the organisation. These capabilities, combined with threat intelligence insights and supporting services across the industrial control system (ICS) journey, are sure to provide peace of mind and added confidence in the organisation’s security strategy. For more information on Dragos’s research around cyber threat activity targeting the European energy sector, download the Dragos European Industrial Infrastructure Cyber Threat Perspective report, here.


How to hire (and retain) Gen Z talent

The global pandemic has forever changed the way we work. The remote work model has been successful, and we’ve learned that productivity does not necessarily decrease when managers and their team members are not physically together. This has been a boon for Gen Z – a generation that grew up surrounded by technology. Creating an environment that gives IT employees the flexibility to conduct their work remotely has opened the door to a truly global workforce. Combined with the advances in digital technologies, we’ve seen a rapid and seamless transition in how employment is viewed. Digital transformation has leveled the playing field for many companies by changing requirements around where employees need to work. Innovative new technologies, from videoconferencing to IoT, have shifted the focus from an employee’s location to their ability. Because accessing information and managing vast computer networks can be done remotely, the location of workers has become a minor issue.


'Sliver' Emerges as Cobalt Strike Alternative for Malicious C2

Enterprise security teams, which over the years have honed their ability to detect the use of Cobalt Strike by adversaries, may also want to keep an eye out for "Sliver." It's an open source command-and-control (C2) framework that adversaries have increasingly begun integrating into their attack chains. "What we think is driving the trend is increased knowledge of Sliver within offensive security communities, coupled with the massive focus on Cobalt Strike [by defenders]," says Josh Hopkins, research lead at Team Cymru. "Defenders are now having more and more successes in detecting and mitigating against Cobalt Strike. So, the transition away from Cobalt Strike to frameworks like Sliver is to be expected," he says. Security researchers from Microsoft this week warned about observing nation-state actors, ransomware and extortion groups, and other threat actors using Sliver along with — or often as a replacement for — Cobalt Strike in various campaigns. Among them is DEV-0237, a financially motivated threat actor associated with the Ryuk, Conti, and Hive ransomware families; and several groups engaged in human-operated ransomware attacks, Microsoft said.


Data Management in the Era of Data Intensity

When your data is spread across multiple clouds and systems, it can introduce latency, performance, and quality problems. And bringing together data from different silos and getting those data sets to speak the same language is a time- and budget-intensive endeavor. Your existing data platforms also may prevent you from managing hybrid data processing, which, as Ventana Research explains, “enable[s] analysis of data in an operational data platform without impacting operational application performance or requiring data to be extracted to an external analytic data platform.” The firm adds that: “Hybrid data processing functionality is becoming increasingly attractive to aid the development of intelligent applications infused with personalization and artificial intelligence-driven recommendations.” Such applications are clearly important because they can be key business differentiators and enable you to disrupt a sector. However, if you are grappling with siloed systems, siloed data, and legacy technology that cannot ingest high volumes of complex data quickly enough to act in the moment, you may believe it is impossible for your business to benefit from the data synergies that you and your customers might otherwise enjoy.


How to Achieve Data Quality in the Cloud

Everybody knows data quality is essential. Most companies spend significant money and resources trying to improve data quality. However, despite these investments, companies lose money yearly because of poor-quality data, with losses ranging from $9.7 million to $14.2 million annually. Traditional data quality programs do not work well for identifying data errors in cloud environments because: Most organizations only look at the data risks they know, which is likely only the tip of an iceberg. Usually, data quality programs focus on completeness, integrity, duplicates and range checks. However, these checks only represent 30 to 40 percent of all data risks. Many data quality teams do not check for data drift, anomalies or inconsistencies across sources, which contribute to over 50 percent of data risks. The number of data sources, processes and applications has exploded because of the rapid adoption of cloud technology, big data applications and analytics. These data assets and processes require careful data quality control to prevent errors in downstream processes. The data engineering team can add hundreds of new data assets to the system in a short period.
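A data-drift check of the kind the article says many teams skip can be as simple as comparing a new batch's mean against a trailing baseline and flagging large standardized shifts. The sketch below is illustrative only; the z-score cutoff is an assumption, and production systems typically use richer tests (population stability index, Kolmogorov–Smirnov, per-column anomaly detection):

```python
import statistics

def drifted(baseline: list[float], batch: list[float],
            z_cutoff: float = 3.0) -> bool:
    """Flag a batch whose mean shifts more than z_cutoff baseline
    standard deviations away from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1.0  # guard against zero variance
    return abs(statistics.mean(batch) - mu) / sigma > z_cutoff

# Baseline behaves around 100; a batch near 100 passes, a batch near 150 drifts.
baseline = [100.0, 102.0, 98.0, 101.0, 99.0]
ok_batch = [100.5, 99.5, 101.0]
drifted_batch = [150.0, 155.0, 149.0]
```

Running such a check automatically on every new data asset is one way to cover the drift/anomaly category of risk without waiting for a downstream process to break.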



Quote for the day:

"Problem-solving leaders have one thing in common: a faith that there's always a better way." -- Gerald M. Weinberg

Daily Tech Digest - August 26, 2022

CISA: Just-Disclosed Palo Alto Networks Firewall Bug Under Active Exploit

Bud Broomhead, CEO at Viakoo, says bugs that can be marshaled into service to support DDoS attacks are in more and more demand by cybercriminals -- and are increasingly exploited. "The ability to use a Palo Alto Networks firewall to perform reflected and amplified attacks is part of an overall trend to use amplification to create massive DDoS attacks," he says. "Google's recent announcement of an attack which peaked at 46 million requests per second, and other record-breaking DDoS attacks will put more focus on systems that can be exploited to enable that level of amplification." The speed of weaponization also fits the trend of cyberattackers taking increasingly less time to put newly disclosed vulnerabilities to work — but this also points to an increased interest in lesser-severity bugs on the part of threat actors. "Too often, our researchers see organizations move to patch the highest-severity vulnerabilities first based on the CVSS," Terry Olaes, director of sales engineering at Skybox Security, wrote in an emailed statement. 


Kestrel: The Microsoft web server you should be using

Kestrel is an interesting option for anyone building .NET web applications. It’s a relatively lightweight server compared to IIS, and as it’s cross-platform, it simplifies how you might choose a hosting platform. It's also suitable as a development tool, running on desktop hardware for tests and experimentation. There’s support for HTTPS, HTTP/2, and a preview release of QUIC, so your code is future-proof and will run securely. The server installs as part of ASP.NET Core and is the default for sites that aren’t explicitly hosted by IIS. You don’t need to write any code to launch Kestrel, beyond using the familiar WebApplication.CreateBuilder method. Microsoft has designed Kestrel to operate with minimal configuration, either using a settings file that’s created when you use dotnet new to set up an app scaffolding or when you create a new app in Visual Studio. Apps are able to configure Kestrel using the APIs in WebApplication and WebApplicationBuilder, for example, adding additional ports. As Kestrel doesn’t run until your ASP.NET Core code runs, this is a relatively easy way to make server configuration dynamic, with any change simply requiring a few lines of code. 


Private 5G networks bring benefits to IoT and edge

Private 5G's potential in enterprise use cases that involve IoT and edge computing is not without challenges that the industry must address; a production-level system requires many touchpoints. Private 5G networks must be planned, deployed, verified and managed by service providers, system integrators and IT teams. Edge computing is a combination of hardware and software. Each of these elements can fail, so they must be maintained and upgraded practically without any downtime, especially for real-time, mission-critical applications. Admins must manage edge deployments with containers or VM orchestration. Both public cloud vendors and managed open source vendors are addressing this space by providing a virtual edge computing framework for application developers. Public cloud vendors have also started to provide out-of-the-box edge infrastructure that runs the same software tools that run on their public cloud, which can make it easier for developers. For private 5G, IoT and edge to be successful, the industry must develop an extensive roadmap. Many of these solutions require long-term maintenance and upgrades.


Google is exiting the IoT services business. Microsoft is doing the opposite

Google will be shuttering its IoT Core service, the company disclosed last week. Its stated reason: Partners can better manage customers' IoT services and devices. While Microsoft also is relying heavily on partners as part of its IoT and edge-computing strategies, it is continuing to build up its stable of IoT services and more tightly integrate them with Azure. CEO Satya Nadella's "intelligent cloud/intelligent edge" pitch is morphing into more of an intelligent end-to-end distributed-computing play. ... Among Microsoft's current IoT offerings: Azure IoT Hub, a service for connecting, monitoring and managing IoT assets; Azure Digital Twins, which uses "spatial intelligence" to model physical environments; Azure IoT Edge, which brings analytics to edge-computing devices; Azure IoT Central; Windows for IoT, which enables users to build edge solutions using Microsoft tools. On the IoT OS front, Microsoft has Azure RTOS, its real-time IoT platform; Azure Sphere, its Linux-based microcontroller OS platform and services; Windows 11 IoT Enterprise and Windows 10 IoT Core -- a legacy IoT OS platform which Microsoft still supports but which hasn't been updated substantially since 2018.


Twitter's Ex-Security Chief Files Whistleblower Complaint

Zatko's complaint alleges that numerous security problems remained unresolved when he left. It also alleges that Twitter had been "penetrated by foreign intelligence agents," including Indian government agents as well as another, unnamed foreign intelligence agency. A federal jury recently found a former Twitter employee guilty of acting as an unregistered agent for Saudi Arabia while at the company. In his February final report to Twitter, Zatko alleged that "inaccurate and misleading" information concerning "Twitter's information security posture" had been transmitted to the company's risk committee, which risked the company making inaccurate reports to regulators, including the FTC. According to his report, the risk committee had been told that "nearly all Twitter endpoints (laptops) have security software installed." But he said the report failed to mention that of about 10,000 systems, 40% were not in compliance with "basic security settings," and 30% "do not have automatic updates enabled."


Announcing built-in container support for the .NET SDK

Containers are an excellent way to bundle and ship applications. A popular way to build container images is through a Dockerfile – a special file that describes how to create and configure a container image. ... This Dockerfile works very well, but there are a few caveats to it that aren’t immediately apparent, which arise from the concept of a Docker build context. The build context is the set of files that are accessible inside of a Dockerfile, and is often (though not always) the same directory as the Dockerfile. If you have a Dockerfile located beside your project file, but your project file is underneath a solution root, it’s very easy for your Docker build context to not include configuration files like Directory.Packages.props or NuGet.config that would be included in a regular dotnet build. You would have this same situation with any hierarchical configuration model, like EditorConfig or repository-local git configurations. This mismatch between the explicitly-defined Docker build context and the .NET build process was one of the driving motivators for this feature.
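The build-context pitfall described above reduces to a path-containment question: a file participates in a Docker build only if it lives under the context directory. This small sketch makes that concrete; the repository layout and file names are hypothetical examples of the scenario in the paragraph:

```python
from pathlib import PurePosixPath

def in_build_context(context_dir: str, file_path: str) -> bool:
    """True if file_path lives underneath context_dir (and so is visible
    to a Docker build whose context is context_dir)."""
    context = PurePosixPath(context_dir)
    return context in PurePosixPath(file_path).parents

# Context = the project directory, next to the Dockerfile:
project_file_visible = in_build_context(
    "/repo/src/MyApp", "/repo/src/MyApp/MyApp.csproj")

# Solution-level config sits ABOVE the context, so Docker cannot see it,
# even though a regular `dotnet build` would pick it up:
solution_config_visible = in_build_context(
    "/repo/src/MyApp", "/repo/Directory.Packages.props")
```

The usual fixes are to run the build with the solution root as the context, or to use a mechanism (like the SDK feature announced here) that bypasses the Dockerfile context entirely.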


The Quantum Computing Threat: Risks and Responses

Asymmetric cryptographic systems are most at risk, implying that today’s public key infrastructure that forms the basis of almost all of our security infrastructure would be compromised. That being said, the level of risk may be different depending on the data to be protected – for instance, a life insurance policy that will be valid for many years to come; a smart city that is built for our next generation. Similarly, the financial system, both centralized and decentralized, may have different vulnerabilities. For this reason, post-quantum security should be addressed as part of an organization’s overall cybersecurity strategy. It is of such importance that both the C-suite and the board should pay attention. While blockchain-based infrastructures are still considered safe, being largely hash-based, transactions are digitally signed using traditional encryption technologies such as elliptic curve and therefore could be quantum-vulnerable at the end points. Blockchain with quantum-safe features will no doubt gain more traction as NFTs, metaverse and crypto-assets continue to mature.


‘Post-Quantum’ Cryptography Scheme Is Cracked on a Laptop

It’s impossible to guarantee that a system is unconditionally secure. Instead, cryptographers rely on enough time passing and enough people trying to break the problem to feel confident. “That does not mean that you won’t wake up tomorrow and find that somebody has found a new algorithm to do it,” said Jeffrey Hoffstein, a mathematician at Brown University. Hence competitions like NIST’s are so important. In the previous round of the NIST competition, Ward Beullens, a cryptographer at IBM, devised an attack that broke a scheme called Rainbow in a weekend. Like Castryck and Decru, he was only able to stage his attack after he viewed the underlying mathematical problem from a different angle. And like the attack on SIDH, this one broke a system that relied on different mathematics than most proposed post-quantum protocols. “The recent attacks were a watershed moment,” said Thomas Prest, a cryptographer at the startup PQShield. They highlight how difficult post-quantum cryptography is, and how much analysis might be needed to study the security of various systems.


Intel Adds New Circuit to Chips to Ward Off Motherboard Exploits

Under normal operations, once the microcontrollers activate, the security engine loads its firmware. In this motherboard hack, attackers attempt to trigger an error condition by lowering the voltage. The resulting glitch gives attackers the opportunity to load malicious firmware, which provides full access to information such as biometric data stored in trusted platform module circuits. The tunable replica circuit protects systems against such attacks. Nemiroff describes the circuit as a countermeasure to prevent the hardware attack by matching the time and corresponding voltage at which circuits on a motherboard are activated. If the values don't match, the circuit detects an attack and generates an error, which will cause the chip's security layer to activate a failsafe and go through a reset. "The only reason that could be different is because someone had slowed down the data line so much that it was an attack," Nemiroff says. Such attacks are challenging to execute because attackers need to get access to the motherboard and attach components, such as voltage regulators, to execute the hack.


Why Migrating a Database to the Cloud is Like a Heart Transplant

Your migration project’s enemies are surprises. There are numerous differences between databases from number conversions to date/time handling, to language interfaces, to missing constructs, to rollback behavior, and many others. Proper planning will look at all the technical differences and plan for them. Database migration projects also require time and effort, according to Ramakrishnan, and if they are rushed the results will not be what anyone wants. He recommended that project leaders create a single-page cheat sheet to break down the scope and complexity of the migration to help energize the team. It should include the project’s goals, the number of users impacted, the reports that will be affected by the change, the number of apps it touches, and more. Before embarking on the project, organizations should ask the following question: “How much will it cost to recoup the investment in the new database migration?” Organizations need to check that the economics are sound, and that means also analyzing the opportunity cost for not completing the migration.
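One concrete instance of the "surprises" above can be shown with the standard library alone: SQLite (like PostgreSQL) follows the SQL standard and keeps an empty string distinct from NULL, whereas Oracle famously stores an empty string as NULL, so the same query returns different rows after a migration. A minimal sketch (table and data are made up):

```python
import sqlite3

# SQLite keeps '' and NULL distinct; Oracle folds '' into NULL, so a
# migration between the two silently changes the results of this query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, note TEXT)")
conn.execute("INSERT INTO customers VALUES ('Ada', '')")
conn.execute("INSERT INTO customers VALUES ('Bob', NULL)")

empty = conn.execute("SELECT name FROM customers WHERE note = ''").fetchall()
null = conn.execute("SELECT name FROM customers WHERE note IS NULL").fetchall()
print(empty)  # [('Ada',)]  -- on Oracle this query would match no rows
print(null)   # [('Bob',)]
```

Proper planning means cataloguing differences like this one for every construct the application relies on, before the cutover rather than after.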



Quote for the day:

"Do not follow where the path may lead. Go instead where there is no path and leave a trail." -- Muriel Strode

Daily Tech Digest - August 24, 2022

3 reasons cloud computing doesn’t save money

Without cloud spending visibility and insights, you’re basically driving a car without a dashboard. You don’t know how fast you’re going or when you’re about to run out of gas. A guessing game turns into a big surprise when cloud spending is way above what everyone initially thought. That sucking sound you hear is the value that you thought cloud computing would bring now leaving the business. Second, there is no discipline or accountability. A lack of cloud cost monitoring means we can’t see what we’re spending. The other side of this coin is a lack of accountability. Even when a business monitors cloud spending, that data is useless if everyone knows there are no penalties. Why should people change their behavior? They need known incentives to conserve cloud computing resources as well as known consequences. Accountability problems can usually be corrected by leadership making some unpopular decisions. Trust me, you’ll either deal with accountability now or wait until later when it becomes much harder to fix.


How attackers use and abuse Microsoft MFA

The legitimate owner of an account compromised this way is unlikely to spot that the second MFA app has been added. “It is only obvious if one specifically looks for it. If one goes to the M365 security portal, they will see it; but most users never go to that place. It is where you can change your password without being prompted for it, or change an authenticator app. In day-to-day use, people only change passwords when mandated through the prompt, or when they change their phone and want to move their authenticator app,” Mitiga CTO Ofer Maor told Help Net Security. Also, an isolated, random prompt for the second authentication factor triggered by the attacker can easily go unnoticed or be ignored by the legitimate account owner. “They get prompted, but once the attacker authenticates on the other authenticator, that prompt disappears. There is no popup or anything that says ‘this request has been approved by another device’ (or something of that sort) to alert the user of the risk. ... ” Maor noted.


The emergence of the chief automation officer

AI and automation can transform IT and business processes to help improve efficiencies, save costs and enable people — employees — to focus on higher-value work. Two of the most important areas of IT operations in the enterprise are issue avoidance and issue resolution because of the massive impact they have on cost, productivity, and brand reputation. The rapid digital expansion among enterprises has led to an immediate uptick in demand from IT leaders to embrace AIops tools to increase workflow productivity and ensure proactive, continuous application performance. With AIops, IT systems and applications are more reliable, and complex work environments can be managed more proactively, potentially saving hundreds of thousands of dollars. This can enable IT staff to focus on high-value work instead of laborious, time-consuming tasks, and identify potential issues before they become major problems.


How a Service Mesh Simplifies Microservice Observability

According to Jay Livens, observability is the practice of capturing a system’s current state based on the metrics and logs it generates. It’s a system that helps us monitor the health of our application, generate alerts on failure conditions, and capture enough information to debug issues whenever they happen. ... A major aspect of observability is capturing network telemetry, and having good network insights can help us solve a lot of the problems we spoke about initially. Normally, the task of generating this telemetry data is left to developers to implement. This is an extremely tedious and error-prone process that doesn’t really end at telemetry. Developers are also tasked with implementing security features and making communication resilient to failures. Ideally, we want our developers to write application code and nothing else. The complications of microservices networking need to be pushed down to the underlying platform. A better way to achieve this decoupling would be to use a service mesh like Istio, Linkerd, or Consul Connect.
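To give a rough sense of the tedium being pushed down to the platform: without a mesh, every service call tends to accumulate hand-rolled retry and telemetry code like the sketch below (all names are illustrative, not from any real service); with Istio, Linkerd, or Consul Connect, the sidecar proxy applies this uniformly to every call with no application code at all.

```python
import time
from functools import wraps

# Hand-rolled resilience + telemetry: the per-call boilerplate a service
# mesh sidecar would otherwise handle transparently for every service.
metrics = {"attempts": 0, "failures": 0}

def resilient(retries=3, backoff=0.01):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                metrics["attempts"] += 1
                try:
                    return fn(*args, **kwargs)
                except ConnectionError:
                    metrics["failures"] += 1
                    time.sleep(backoff * 2 ** attempt)  # exponential backoff
            raise ConnectionError("upstream unavailable after retries")
        return wrapper
    return decorator

calls = {"n": 0}

@resilient()
def fetch_inventory():
    calls["n"] += 1
    if calls["n"] < 3:          # simulate two transient network failures
        raise ConnectionError
    return {"sku-1": 12}

print(fetch_inventory())        # {'sku-1': 12}
print(metrics)                  # {'attempts': 3, 'failures': 2}
```

Multiply this by every endpoint, every team, and every language in the stack, and the case for doing it once in the platform layer becomes clear.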


IT talent: 4 interview questions to prep for

Whether managers have a more hands-on approach or allow their direct reports more autonomy, identifying this during the interview process is in the best interest of both parties. Additionally, some candidates thrive in an office, while others are hoping for a completely remote position or even a hybrid option. Discussing and defining preferences and working environments helps clarify candidates’ expectations for their roles. It also benefits hiring managers, prospective employees, and the companies, which can avoid high turnover rates by being transparent in their recruiting phase. ... people generally love to talk about things that make them proud. By asking this question, hiring managers allow candidates to talk about who they are as individuals rather than just what they bring to the larger business. Obviously, pride can encompass past work projects, but some candidates might also cite volunteer contributions, family achievements, or other accomplishments. Overall, candidates should always be prepared to discuss experiences that have contributed to their growth. 


Beyond purpose statements

Many CEOs are starting to sound like politicians, throwing around lofty language that is vague and hard to pin down. And therein lies the problem, or certainly the challenge: to remain credible and trustworthy, leaders need to shift the conversation from fuzzy purpose bromides to more tangible and concrete statements about the impact their companies are having on society. That is not simply a matter of semantics, as there is a world of difference between purpose and impact. It is difficult to challenge a purpose. If a company says its reason for existing in some form or fashion is to try to make the world a better place, how can you pressure-test that claim? If that company is providing goods or services that customers are willing to pay for, and it employs people and pays vendors, then, ipso facto, it is doing something that has a perceived value. As long as it’s not doing anything criminal or unethical, it’s working “to promote the good of the people,” to borrow the language from one organization’s mission statement. But if you are claiming that you are making an impact, then you need proof. And that’s what makes a statement powerful.


Managing Expectations: Explainable A.I. and its Military Implications

AI systems can be purposefully programmed to cause death or destruction, either by the users themselves or through an attack on the system by an adversary. Unintended harm can also result from inevitable margins of error which can exist or occur even after rigorous testing and proofing of the AI system according to applicable guidelines. Indeed, even ‘regular’ operations of deployed AI systems are riddled with faults that are only discoverable at the output stage. ... A primary cause of such faults is flawed training datasets and commands, which can result in misrepresentation of critical information as well as unintended biases. Another, and perhaps far more challenging, reason is issues with algorithms within the system which are undetectable and inexplicable to the user. As a result, AI has been known to produce outputs based on spurious correlations and information processing that does not follow the expected rules, similar to what is referred to in psychology as the ‘Clever Hans effect’.


POCs, Scrum, and the Poor Quality of Software Solutions

It is generally accepted that quality is the ‘reliability of a product’. ‘Reliability’, though, as we are used to thinking of it in classical science, is the attribute of consistently getting the same results under the same conditions. In this classical view, building a Quality solution means that we should build a product that never fails. Ironically, understanding reliability this way harms Quality instead of achieving it. Aiming to build a product that never fails can only result in extremely complex systems that are hard to maintain, causing Quality to degrade over time. The issue with reliability in this classical sense is the false assumption that we control all conditions, while in fact we don’t (hardware failure, network latency, external service throttling, etc.). We need to extend the meaning of reliability to also accommodate cases where conditions are not aligned: Quality is not only a measure of how reliable a software product is when it is up & running, but also a measure of how reliable it is when it fails.
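A small illustration of "reliable when it fails" (all names here are hypothetical): instead of hardening a price lookup until it can supposedly never fail, the design accepts that the dependency will fail and degrades gracefully to the last known good value.

```python
# Design for failure rather than for its absence: if the live lookup
# fails (network outage, throttling, ...), serve the last known good
# value instead of propagating the error to the user.
last_known_good = {}

def get_price(sku, live_lookup):
    try:
        price = live_lookup(sku)
        last_known_good[sku] = price     # refresh the fallback cache
        return price, "live"
    except Exception:
        if sku in last_known_good:
            return last_known_good[sku], "cached"
        raise                            # no safe fallback: surface it

def flaky_lookup(sku):
    raise TimeoutError("pricing service throttled")

# Seed the cache with one successful call, then fail over to it.
print(get_price("sku-1", lambda sku: 9.99))   # (9.99, 'live')
print(get_price("sku-1", flaky_lookup))       # (9.99, 'cached')
```

The product is still "reliable" in the extended sense: the uncontrolled condition occurred, and behavior stayed predictable and useful.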


Critical infrastructure is under attack from hackers. Securing it needs to be a priority - before it's too late

In order to protect networks – and people – from the consequences of attacks, which could be significant, many of the required security measures are among the most commonly recommended and often simplest practices. ... Cybersecurity can become more complex for critical infrastructure, particularly when dealing with older systems, which is why it's vital that those running them know their own network, what's connected to it and who has access. Taking all of this into account, providing access only when necessary can keep networks locked down. In some cases, that might mean ensuring older systems aren't connected to the outside internet at all, but rather on a separate, air-gapped network, preferably offline. It might make some processes more inconvenient to manage, but it's better than the alternative should a network be breached. Incidents like the South Staffordshire Water attack and the Florida water incident show that cyber criminals are targeting critical infrastructure more and more. Action needs to be taken sooner rather than later to prevent potentially disastrous consequences not just for organizations, but for people too.


How to Nurture Talent and Protect Your IT Stars

Anderson adds that building out growth and learning opportunities starts with the CTO. “That means ensuring we have learning and training goals identified, which is used as a critical element for annual performance expectations of our IT leaders and managers, not only for themselves, but for their staff,” he says. As Court notes, the company invests internally through LIFT University, with a cadre of continuing education augmented by external training. “For career growth, I recommend IT teams have a close reporting or partnership to the engineering and product teams,” Anderson adds. He says the rationale for this is simple -- as employees want to perfect their craft, they need to work for and with people that understand their craft, and push them to continually learn through team, project, and program collaboration. “As we all know, the one constant is that technology is constantly evolving, so continuous learning for employees, especially our IT team, is a must,” he says. SoftServe’s Semenyshyn says that closely monitoring employee burnout is a priority across the IT industry, pointing out the advantage of the IT business in a large global company is the possibility of rotations.



Quote for the day:

"Teamwork is the secret that make common people achieve uncommon result." -- Ifeanyi Enoch Onuoha

Daily Tech Digest - August 23, 2022

Unstructured data storage – on-prem vs cloud vs hybrid

Enterprises have responded to growing storage demands by moving to larger, scale-out NAS systems. The on-premise market here is well served, with suppliers Dell EMC, NetApp, Hitachi, HPE and IBM all offering large-capacity NAS technology with different combinations of cost and performance. Generally, applications that require low latency – media streaming or, more recently, training AI systems – are well served by flash-based NAS hardware from the traditional suppliers. But for very large datasets, and the need to ease movement between on-premise and cloud systems, suppliers are now offering local versions of object storage. The large cloud “superscalers” even offer on-premise, object-based technology so that firms can take advantage of object’s global namespace and data protection features, with the security and performance benefits of local storage. However, as SNIA warns, these systems typically lack interoperability between suppliers. The main benefits of on-premise storage for unstructured data are performance, security, plus compliance and control – firms know their storage architecture, and can manage it in a granular way.


What is CXL, and why should you care?

Eventually, CXL is expected to be an all-encompassing cache-coherent interface for connecting any number of CPUs, memory, process accelerators (notably FPGAs and GPUs), and other peripherals. The CXL 3.0 spec, announced last week at the Flash Memory Summit (FMS), takes that disaggregation even further by allowing other parts of the architecture—processors, storage, networking, and other accelerators—to be pooled and addressed dynamically by multiple hosts and accelerators, just like the memory in 2.0. The 3.0 spec also provides for direct peer-to-peer communications over a switch or even across a switch fabric, so two GPUs could theoretically talk to one another without using the network or getting the host CPU and memory involved. Kurt Lender, co-chair of the CXL marketing work group and a senior ecosystem manager at Intel, said, “It’s going to be basically everywhere. It’s not just IT guys who are embracing it. Everyone’s embracing it. So this is going to become a standard feature in every new server in the next few years.” So how will applications running in enterprise data centers benefit?


Technology alone won’t solve your organizational challenges

Whatever your organization’s preference for team building, it should be carefully selected from a range of options, and it should be clear to everyone why the firm chose one particular structure over another and what’s expected of everyone participating. Start with desired outcomes and cultural norms, then articulate principles to empower action, and, finally, provide the skills and tools needed for success. ... Even in the most forward-thinking organizations, people want to know what a meeting is supposed to achieve, what their role is in that meeting, and if gathering people around a table or their screens is the most effective and efficient way to get to the desired outcome. Is there a decision to be made? Or is the purpose information sharing? Have people been given the chance to opt out if the above points are not clear? Asking these questions can serve as a rapid diagnostic for what you are getting right—and wrong—in your meetings. Poorly run meetings sap energy and breed mediocrity.


For developers, too many meetings, too little 'focus' time

That’s not to say that meetings aren’t important, but it makes sense for managers to find the right balance for their teams, said Dan Kador, vice president of engineering at Clockwise. “It's something that companies have to pay attention to and try to understand their meeting culture — what's working and what's not working for them." “It is important that teams get together to discuss things and make sure they are all on the same page, but often meetings are scheduled at regular intervals even if they aren’t necessary,” said Jack Gold, principal analyst and founder at J. Gold Associates. “We are all subjected to weekly meetings, or other intervals, where, even if there is nothing to discuss, the meeting takes place anyway. And some meeting organizers feel obligated to use up the entire scheduled time.” Of course, meeting overload is not just an issue for those writing code. “Too much time spent in meetings is not just a problem for developers,” said Gold. “It is a problem across the board for employees in many companies.”


How To Remain Compliant In The New Era Of Payment Security

To counter the threat of e-commerce skimming, the card companies are using the two tools they have in their arsenal again: by making stolen data worthless and by creating new technical security standards. To make stolen payment card data worthless, there’s a chip-equivalent technology for e-commerce called 3-D-Secure v2, which has already been rolled out in the EU. This technology requires something more than just the knowledge of the numbers printed on a payment card to make an online transaction. After entering their payment card data, the consumer may have to further confirm a purchase using a bank’s smartphone app or by entering a code received by SMS. Alongside this re-engineering of the payment system, the latest version of the Payment Card Industry Data Security Standard (PCI DSS) includes new technical requirements to prevent and detect e-commerce skimming attacks. PCI DSS applies to all entities involved in the payment ecosystem, including retailers, payment processors and financial institutions. Firstly, website operators will need to maintain an inventory of all the scripts included in their website and determine why the script is necessary.
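That script-inventory requirement can be bootstrapped mechanically. A minimal, illustrative sketch using only Python's standard library (the page content below is made up); a real deployment would also record why each script is present and monitor for changes:

```python
from html.parser import HTMLParser

# Collect every <script> on a page: a starting point for the PCI DSS
# requirement to inventory and justify each script on a payment page.
class ScriptInventory(HTMLParser):
    def __init__(self):
        super().__init__()
        self.external = []   # src attributes of external scripts
        self.inline = 0      # count of inline <script> blocks

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.external.append(src)
            else:
                self.inline += 1

page = """
<html><head>
  <script src="/js/checkout.js"></script>
  <script src="https://cdn.example.com/analytics.js"></script>
  <script>var inlineConfig = {};</script>
</head></html>
"""
inv = ScriptInventory()
inv.feed(page)
print(inv.external)  # ['/js/checkout.js', 'https://cdn.example.com/analytics.js']
print(inv.inline)    # 1
```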


Q&A: How Data Science Fits into the Cloud Spend Equation

The great thing about cloud is you use it when you need it. Obviously, you pay for using it when you need it, but often times data science applications, especially ones you’re running over large datasets, aren’t running continuously or don’t need to be structured in a way that they run continuously. Therefore, you’re talking about a very concentrated amount of spend for a very short amount of time. Buying hardware to do that means your hardware sits idle unless you are very active about making sure you’re being very efficient in the utilization of that resource over time. One of the biggest advantages of cloud is that it runs and scales as you need it to. So even a tiny team can run a massive computation and run it when they need to and not consistently. That adds challenges, of course. “I fired this thing off on Friday, I come back in on Monday and it’s still running, and I accidentally spent $6,000 this weekend. Oops.” That happens all the time and so much of that is figuring out how to establish guardrails. Sometimes data science gets treated like, “You know, they’re going to do whatever they need to.”
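Guardrails of the kind described can start very simply. A hypothetical sketch that flags jobs whose projected spend crosses a budget threshold before the Monday surprise (job names, the hourly rate, and the budget are all made up):

```python
# Flag compute jobs whose projected cost exceeds a budget threshold,
# e.g. the forgotten Friday job still running on Monday morning.
def runaway_jobs(jobs, hourly_rate, budget):
    flagged = []
    for name, hours in jobs:
        cost = round(hours * hourly_rate, 2)
        if cost > budget:
            flagged.append((name, cost))
    return flagged

jobs = [("feature-backfill", 2), ("weekend-training-run", 60)]
print(runaway_jobs(jobs, hourly_rate=32.77, budget=500))
# [('weekend-training-run', 1966.2)]
```

Real guardrails would hook into the provider's billing and tagging APIs, but the logic (projected spend versus an agreed threshold, checked automatically) is the same.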


Advantages of open source software compared to paid equivalents

The strength of open source technology is the fact that these products are developed with an iterative approach by a large group of experts. Open source communities are made up of diverse sets of people from across the world. This kind of diversity is beneficial because ideas and issues get vetted in multiple ways. From an enterprise perspective, open source software is a safe investment because you know there is a dedicated community with product experience. Many developers aren’t working for money, and are easy to approach and ask for help. You can raise questions or concerns directly with developers, or opt to obtain a paid support plan through the community for highly technical inquiries. ... Of course, since open source products are designed for a large audience, sometimes they won’t be able to perfectly fit a company’s needs. Fortunately, the open source approach encourages customisation and integration, meaning your own internal teams can start with an open source baseline and tweak it. Improvements can also be fed back into the open source development cycle.


3 steps for CIOs to build a sustainable business

Data is key. To establish a baseline, the CIO must measure the impact of the enterprise’s full technology stack, including outside partners and providers. This requires asking for, extracting, and reconciling data across external parties – and remembering to aggregate more than just decarbonization data. Cloud and sourcing choices and the disposition of assets after a cloud migration contribute to the carbon footprint. The CIO must also guide employees to make good sustainability choices. One example: according to Cisco, there are 27.1 billion devices connected to the internet – that’s more than three devices for every person on the planet. Many enterprise employees carry two mobile phones but don’t need to – existing technology enables them to segment two different environments on one device. Also, organizations with service contracts can reject hardware refreshes from a contract, empowering employees to decide if they need a new device or just a new battery.


Architecture and Governance in a Hybrid Work Environment

Architects can’t architect if they don’t speak to other people. Likewise, governance isn’t effective if you are talking best practice to yourself alone in a dark room someplace. Getting this right in normal times isn’t always easy. People have meetings, they are working hard and don’t want to be disturbed, they need their coffee from the corporate cafeteria or the Starbucks down the street, they’re at lunch or they’re leaving at 4:30 to get to their kid’s baseball game. In short, it isn’t always possible in normal times to round people up and have a day-long whiteboard session on architecture. With hybrid working models, it is even more difficult because we can’t simply walk over to the cube next to us and have a conversation. In fact, most of the time we have no idea where people actually are or what they’re doing. We rely on text, chat, Teams, Outlook and other tools to give us a sense of whether someone has 5 minutes to chat. If you want a 3-hour whiteboard session, that involves a high degree of coordination with people’s calendars in Outlook. Even then, people always seem to have ‘hard stops’ at times that are really incompatible with thinking and design sessions.


Karma Calling: LockBit Disrupted After Leaking Entrust Files

Given the damage and disruption being caused by LockBit and other ransomware groups, one obvious question is why these gangs aren't being disrupted with greater frequency, says Allan Liska, principal intelligence analyst at Recorded Future. "We all know these sites are MacGyvered together with bailing wire and toothpicks and are rickety as hell. We should do stuff like this to impose cost on them," Liska says. Some members of the information security community prefer stronger measures, of the "Aliens" protagonist Ripley variety. "I always say: go kinetic and solve the problem permanently," says Ian Thornton-Trump, CISO of Cyjax. "Attribution is for the lawyers. I recommend a strike from orbit, it's the only way to be sure," he says. Another explanation for the attack would be one or more governments opting to "impose costs" on the ransomware gang, says Brett Callow, a threat analyst at Emsisoft. As he notes, the imposing-costs phrase is a direct quote from Gen. Paul M. Nakasone, the head of Cyber Command, who last year told The New York Times that the military has been tasked with not just helping law enforcement track ransomware groups, but also to disrupt them.



Quote for the day:

"The manager has a short-range view; the leader has a long-range perspective." -- Warren G. Bennis

Daily Tech Digest - August 22, 2022

Law Firm Cyber Risk: The 5 Ways Cybercriminals Most Likely Will Attack Your Computers — And 7 Things You Can Do

It’s always better to deal with security risks early on while they’re still small rather than later when they turn huge and cause massive woe. Indeed, a Voke Media survey found that 80% of companies hit by a data breach said they could have prevented it had they only hardened their systems by installing updates and security patches in a timely way. That’s something you too need to be doing, but if you don’t have IT staff trained to monitor, maintain and patch your computers, you will find it advantageous to entrust those tasks to a reputable outside service. This will save you time and greatly reduce the potential for installation errors (those that cause data losses, file corruption or even system crashes). ... Backing up safeguards your critical data against human error, illegitimate deletion, programmatic errors, malicious insiders, malware and hackers. Cloud-to-cloud SaaS backup is ideal — especially if it’s fully automated, HIPAA compliant, running nonstop in the background and employing multiple layers of operational and physical security.


The rise of the data lakehouse: A new era of data value

Gartner’s Ronthal sees the evolution of the data lake to the data lakehouse as an inexorable trend. “We are moving in the direction where the data lakehouse becomes a best practice, but everyone is moving at a different speed,” Ronthal says. “In most cases, the lake was not capable of delivering production needs.” Despite the eagerness of data lakehouse vendors to subsume the data warehouse into their offerings, Gartner predicts the warehouse will endure. “Analytics query accelerators are unlikely to replace the data warehouse, but they can make the data lake significantly more valuable by enabling performance that meets requirements for both business and technical staff,” concludes its report on the query accelerator market. ... “We do see the future of warehouses and lakes coming into a lakehouse, where one system is good enough,” Yuhanna says. For organizations with distributed warehouses and lakes, the mesh architecture such as that of Starburst will fill a need, according to Yuhanna, because it enables organizations to implement federated governance across various data locations.


Devs don’t want to do ops

“The intention is not to put the burden on the developer, it is to empower developers with the right information at the right time,” Harness’s Durkin said. “They don’t want to configure everything, but they do want the information from those systems at the right time to allow operations and security and infrastructure teams to work appropriately. Devs shouldn’t care unless something breaks.” Nigel Simpson, ex-director of enterprise technology strategy at the Walt Disney Company, wants to see companies “recognize this problem and to work to get developers out of the business of worrying about how the machinery works—and back to building software, which is what they’re best at.” ... “Developer control over infrastructure isn’t an all-or-nothing proposition,” Gartner analyst Lydia Leong wrote. “Responsibility can be divided across the application lifecycle, so that you can get benefits from ‘you build it, you run it’ without necessarily parachuting your developers into an untamed and unknown wilderness and wishing them luck in surviving because it’s ‘not an infrastructure and operations team problem’ anymore.”


Defense-in-depth: a proven strategy to protect industrial assets

The first step to any effective OT-security program is building alignment between executives, business leaders, IT and operations. Start by bringing key stakeholders together to establish a clear understanding of business line requirements and critical-system interdependencies. You’ll need frequent and clear communication between OT, IT and engineering. ... Implement an IT/OT segmentation strategy. An IT/OT segmentation strategy separates ICS networks from enterprise networks to prevent bad actors from entering enterprise networks to access ICS devices. This segmentation model can integrate with an IT/OT integration demarcation zone (DMZ) for management tools, security tools and jump hosts, and can establish security zones to ensure devices are logically isolated to allow only required communications. ... Use multi-factor authentication. While most ICS devices can’t support the implementation of multi-factor authentication (MFA), this can still be a viable tool. A jump host that requires MFA can help prevent unauthorized access and direct connections from a lower-security network into a higher one.
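At its simplest, the security-zone model described above reduces to an explicit, deny-by-default allow-list of which zones may talk to which. A toy sketch (zone names and the allowed pairs are purely illustrative):

```python
# Deny-by-default communication matrix between security zones: only
# listed (source, destination) pairs may communicate; everything else
# is dropped, so enterprise IT cannot reach ICS devices directly.
ALLOWED = {
    ("enterprise-it", "dmz"),
    ("dmz", "ics-control"),   # jump-host path, with MFA enforced at the DMZ
}

def permitted(src_zone, dst_zone):
    return (src_zone, dst_zone) in ALLOWED

print(permitted("enterprise-it", "dmz"))          # True
print(permitted("enterprise-it", "ics-control"))  # False: must go via the DMZ
```

Real segmentation lives in firewalls and network gear rather than application code, but the design question is the same: enumerate the required communications, and deny everything else.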


How IoT and Metaverse Will Complement Each Other?

IoT devices often have a simple interface and interact with real-world devices. But standard IoT devices with screens may employ the Metaverse to offer a 3D digital user experience. As a result, using IoT devices will give users a more immersive experience. The ability to stay present in real and virtual worlds will be available. As a result, companies can hire an IoT app developer to greatly customize the user interface and experience. As said above, the Metaverse will feel more akin to the physical world when IoT is used. More interaction between people and IoT devices and the intricate environment and processes of the Metaverse will be possible. We will be able to make better decisions with less learning and training, thanks to the immersive nature of the Metaverse and the real-world use cases. Effective for long-term planning: the amount of digital content derived from real-world objects, such as structures, people, cars, clothing, etc., constantly expands in the Metaverse. As a result, businesses aim to replicate our physical world exactly in cyberspace.


Risk Transfer Is The Key To Successful AI

The most significant AI challenge businesses face is inventing new workflows that leverage AI in existing or new business models, allowing them to significantly grow their market share within existing or new areas. Without that focus, new AI tools and technologies become disastrous distractions from business value. Instead, the business should focus on meaningful transfers of risk: it can add more customers and demand more for its services when it helps customers reduce their own risk. The business’s AI solution then needs a clear transfer of risk itself. Without the AI solution, an expert within the business would be providing the service manually; with the AI, the expert can deliver the service at greater quality and/or greater scale. Another former colleague of mine at General Electric Global Research, Jim Bray, told me a long time ago that a large part of his value to the company was helping reduce risk around complex engineering and science. A significant contribution that AI scientists make for industrial businesses is in assessing risk and the likelihood of project success.


AI Song Contest: The Eurovision spin-off where music is written by machines

A good AI-generated song is the result of the hard work of entire teams of scientists and musicians who often struggle for months before reaching the desired tunes, making up algorithms and feeding ideas to the machine. The Galician team PAMP! - who came second at this year’s contest to Thailand’s song Enter Demons & Gods - took four months to create its track AI-Lalelo, a song which pays tribute to Galician women keeping the language, traditions and culture of the Spanish region alive. They started by getting the AI programme - an autoregressive language model called GPT-3 which uses deep learning to produce human-like text - to learn to speak Galician, a minority language estimated to be spoken by some 2.4 million people in northwestern Spain. “AI tools work in state languages, not in minority languages,” Joel Cava, Coordinator of the PAMP! Team and Creative Manager of CECUBO Group, told Euronews Next. “For the lyrics, we had to develop a corpus in Galician so that the machine (GPT-3) would learn to speak in our mother tongue”.


How Good Is Your Code Review Process?

An effective code review process starts with alignment on its objective. As a team, it’s important to determine which outcomes your review process is optimizing for. Is it catching bugs and defects, improving the maintainability of the codebase or increasing stylistic consistency? Maybe it’s less about the code and more about increasing knowledge sharing throughout the team? Determining priorities helps your team focus on what kind of feedback to leave or look for. Reviews that are intended to familiarize the reviewer with a particular portion of the codebase will look different from reviews that are guiding a new team member toward better overall coding practices. Once you know what an effective code review means for your team, you can start adjusting your code review activities to achieve those goals. The metrics that indicate a healthy code review process differ depending on those goals, but with that caveat, there are a few trends every team lead should monitor. Regularly reporting the Time to First Review, Review Coverage, Review Influence and Review Cycles metrics will allow you to quickly diagnose and address problems with your code review process.


Security is hard and won’t get much easier

One major reason security is hard is that it’s difficult to secure a system without understanding the system in its entirety. As open source luminary Simon Willison posits, “Writing secure software requires deep knowledge of how everything works.” Without that fundamental understanding, he continues, developers may follow so-called “best practices” without understanding why they are such, which “is a recipe for accidentally making mistakes that introduce new security holes.” One common rejoinder is that we can automate human error out of development. Simply enforce secure defaults and security issues go away, right? Nope. “I don’t think the tools can save us,” Willison argues. Why? Because “no matter how good the default tooling is, if engineers don’t understand how it keeps them secure they’ll subvert it—without even meaning to or understanding why what they are doing is bad.” Additionally, no matter how good the tool, if it doesn’t fit seamlessly into security-minded processes, it will never be enough.


CIO Kristie Grinnell on creating a culture of transformation

One thing that we do is have people think about it as if this were your own business. Is this the decision that you would make? If you have one dollar, would you spend it on this technology? We need to recognize that we have that role, that power in IT. We should all be thinking that this is our ability to grow the business. Where am I going to put that dollar to get the most bang for my buck? I’m not just over here in IT having to deliver to my budget. If I can give some of that back to invest in something else, and it’s going to make us grow, look what value IT just added. Or I might need to invest it in IT because that’s going to give us a new capability that helps us grow in a different way. So really thinking about how I run IT as a business, and how I think about the return on investment of every single dollar we spend, is important.



Quote for the day:

"In simplest terms, a leader is one who knows where he wants to go, and gets up, and goes." -- John Erskine