Daily Tech Digest - June 26, 2022

Only 3% of Open Source Software Bugs Are Actually Attackable, Researchers Say

Determining what's attackable means looking beyond the mere presence of open source dependencies with known vulnerabilities and examining how they're actually being used, says Manish Gupta, CEO of ShiftLeft. "There are many tools out there that can easily find and report on these vulnerabilities. However, there is a lot of noise in these findings," Gupta says. ... The idea of analyzing for attackability also involves assessing additional factors like whether the package that contains the CVE is loaded by the application, whether it is in use by the application, whether the package is in an attacker-controlled path, and whether it is reachable via data flows. In essence, it means taking a simplified threat modeling approach to open source vulnerabilities, with the goal of drastically cutting down on the fire drills. CISOs have already become all too familiar with these drills. When a new high-profile supply chain vulnerability like Log4Shell or Spring4Shell hits the industry back channels, then blows up into the media headlines, their teams are called to pull long days and nights figuring out where these flaws impact their application portfolios, and even longer hours applying fixes and mitigations to minimize risk exposure.
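To make that triage concrete, here is a minimal sketch in Python of how the attackability criteria Gupta lists could be turned into a filter. It is not ShiftLeft's actual product logic; the Finding fields and the all-conditions rule are assumptions made purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Finding:
        cve_id: str
        package: str
        loaded_by_app: bool             # is the vulnerable package loaded at runtime?
        used_by_app: bool               # does application code actually call into it?
        attacker_controlled_path: bool  # does it sit on a path an attacker can influence?
        reachable_via_data_flow: bool   # can tainted input reach the vulnerable code?

    def is_attackable(finding: Finding) -> bool:
        # Only findings that satisfy every criterion justify a fire drill.
        return (finding.loaded_by_app and finding.used_by_app
                and finding.attacker_controlled_path and finding.reachable_via_data_flow)

    def triage(findings: list[Finding]) -> list[Finding]:
        # Keep the small attackable subset; the rest is the "noise" to fix on a normal cadence.
        return [f for f in findings if is_attackable(f)]

In practice the flags would come from the scanner's reachability analysis rather than being filled in by hand; the point is simply that a handful of context checks shrinks the list of findings that demand immediate attention.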


The Power and Pitfalls of AI for US Intelligence

Depending on the presence or absence of bias and noise within massive data sets, especially in more pragmatic, real-world applications, predictive analysis has sometimes been described as “astrology for computer science.” But the same might be said of analysis performed by humans. A scholar on the subject, Stephen Marrin, writes that intelligence analysis, as practiced by humans, is “merely a craft masquerading as a profession.” Analysts in the US intelligence community are trained to use structured analytic techniques, or SATs, to make them aware of their own cognitive biases, assumptions, and reasoning. SATs—which use strategies that run the gamut from checklists to matrices that test assumptions or predict alternative futures—externalize the thinking or reasoning used to support intelligence judgments, which is especially important given the fact that in the secret competition between nation-states not all facts are known or knowable. But even SATs, when employed by humans, have come under scrutiny by experts like Chang, specifically for the lack of scientific testing that can evidence an SAT’s efficacy or logical validity.


Data Modeling and Data Models: Not Just for Database Design

The prevailing application-centric mindset has caused the fundamental problems that we have today, Bradley said, with multiple disparate copies of the same concept in system after system after system after system. Unless we replace that mindset with one that is more data-focused, the situation will continue to propagate, he said. ... Models have a wide variety of applicable uses and can present different levels of detail based on the intended user and context. Similarly, a map is a model that can be used much like models are used in a business. Like data models, there are different levels of maps for different audiences and different purposes. A map of the counties in an election will provide a different view than a street map used for finding an address. A construction team needs a different type of detail on a map they use to connect a building to city water, and a lesson about different countries on a globe uses still another level of detail targeted to a different type of user. Similarly, some models are more focused on communication and others are used for implementation.


Microverse IDE Unveiled for Web3 Developers, Metaverse Projects

"With Microverse IDE, developers and designers collaboratively build low-latency, high-performance multiuser Microverse spaces and worlds which can then be published anywhere," the company said in a June 21 news release. As part of its Multiverse democratization effort, Croquet has open sourced its Microverse IDE Metaverse world builder and some related components under the Apache License Version 2.0 license so developers and adopters can examine, use and modify the software as needed. ... The California-based Croquet also announced the availability of its multiplane portal technology, used to securely connect independent 3D virtual worlds developed by different parties, effectively creating the Metaverse from independent microservices. These connections can even span different domains, the company said, thus providing safe, secure and decentralized interoperability among various worlds independent of the large technology platforms. "Multiplane portals solve a fundamental problem in the Metaverse with linking web-based worlds in a secure and safe way," the company said.


5 Firewall Best Practices Every Business Should Implement

Changes that impact your IT infrastructure happen every single day. You might install new applications, deploy additional network equipment, grow your user base, adopt non-traditional work practices, etc. As all this happens, your IT infrastructure’s attack surface will also evolve. Sure, you can make your firewall evolve with it. However, making changes to your firewall isn’t something you should take lightly. A simple mistake can take some services offline and disrupt critical business processes. You could also expose ports to external access and compromise your network’s security. Before you apply changes to your firewall, you need to have a change management plan. The plan should specify the changes you intend to implement and what you hope to achieve. ... Poorly configured firewalls can be worse than having no firewall, as a poorly configured firewall will give you a false sense of security. The same is true with firewalls without proper deployment planning or routine audits. However, many businesses are prone to these missteps, resulting in weak network security and a failed investment.
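As a rough illustration of that advice, and not tied to any particular firewall product, a change-management gate can be as simple as refusing to apply a rule change unless the request documents its intent, rollback path, and approval. The field names below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class FirewallChangeRequest:
        description: str       # which rule is being added, removed, or modified
        intended_outcome: str  # what the change is supposed to achieve
        rollback_plan: str     # how to undo the change if a service breaks
        approved_by: str       # who signed off on the change

    def ready_to_apply(request: FirewallChangeRequest) -> bool:
        # Block the change if any part of the plan is missing or empty.
        required = [request.description, request.intended_outcome,
                    request.rollback_plan, request.approved_by]
        return all(field.strip() for field in required)

A real change-management process adds review, scheduling, and post-change audits on top of this, but the principle is the same: no documented plan, no change.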


Debate over AI sentience marks a watershed moment

While it is objectively true that large language models such as LaMDA, GPT-3 and others are built on statistical pattern matching, subjectively this appears like self-awareness. Such self-awareness is thought to be a characteristic of artificial general intelligence (AGI). Well beyond the mostly narrow AI systems that exist today, AGI applications are supposed to replicate human consciousness and cognitive abilities. Even in the face of remarkable AI advances of the last couple of years there remains a wide divergence of opinion between those who believe AGI is only possible in the distant future and others who think this might be just around the corner. DeepMind researcher Nando de Freitas is in this latter camp. Having worked to develop the recently released Gato neural network, he believes Gato is effectively an AGI demonstration, only lacking in the sophistication and scale that can be achieved through further model refinement and additional computing power. The deep learning transformer model is described as a “generalist agent” that performs over 600 distinct tasks with varying modalities, observations and action specifications. 


Data Architecture Challenges

Most traditional businesses preserved data privacy by holding function-specific data in departmental silos. In that scenario, data used by one department was not available or accessible by another department. However, that caused a serious problem in the advanced analytics world, where 360-degree customer data or enterprise marketing data are everyday necessities. Companies, irrespective of their size, type, or nature of business, soon realized that to succeed in the digital age, data had to be accessible and shareable. Then came data science, artificial intelligence (AI), and a host of related technologies that transformed businesses overnight. Today, an average business is data-centric, data-driven, and data-powered. Data is thought of as the new currency in the global economy. In this globally competitive business world, data in every form is traded and sold. For example, 360-degree customer data, global sales data, health care data, and insurance history data are all available with a few keystrokes. A modern Data Architecture is designed to “eliminate data silos, combining data from all corners of the company along with external data sources.”


One in every 13 incidents blamed on API insecurity – report

Lebin Cheng, vice president of API security at Imperva, commented: “The growing security risks associated with APIs correlate with the proliferation of APIs, combined with the lack of visibility that organizations have into these ecosystems. At the same time, since every API is unique, every incident will have a different attack pattern. A traditional approach to security where one simple patch addresses all vulnerabilities doesn’t work with APIs.” Cheng added: “The proliferation of APIs, combined with the lack of visibility into these ecosystems, creates opportunities for massive, and costly, data leakage.” ... By the same metric, professional services were also highly exposed to API-related problems (10%-15%), while manufacturing, transportation, and utilities (all 4%-6%) are in the mid-range. Industries such as healthcare have less than 1% of security incidents attributable to API-related security problems. Many organizations are failing to protect their APIs because it requires equal participation from the security and development teams, which have historically been somewhat at odds.


What Are Deep Learning Embedded Systems And Its Benefits?

Deep learning is a hot topic in machine learning, with many companies looking to implement it in their products. Here are some benefits that deep learning embedded systems can offer: Increased Efficiency and Performance: Deep learning algorithms are incredibly efficient, meaning they can achieve high-performance levels even when running on small devices. This means that deep learning embedded systems can be used to improve the performance of existing devices and platforms or to create new devices that are powerful and efficient. Reduced Size and Weight: Deep learning algorithms are often very compact and can be implemented on small devices without sacrificing too much performance or capability. This reduces the device’s size and weight, making it more portable and easier to use. Greater Flexibility: Deep learning algorithms can often exploit complex data sets to improve performance. This means deep learning embedded systems can be configured to work with various data sets and applications, giving them greater flexibility and adaptability.


State-Backed Hackers Using Ransomware as a Decoy for Cyber Espionage Attacks

The activity cluster, attributed to a hacking group dubbed Bronze Starlight by Secureworks, involves the deployment of post-intrusion ransomware such as LockFile, Atom Silo, Rook, Night Sky, Pandora, and LockBit 2.0. "The ransomware could distract incident responders from identifying the threat actors' true intent and reduce the likelihood of attributing the malicious activity to a government-sponsored Chinese threat group," the researchers said in a new report. "In each case, the ransomware targets a small number of victims over a relatively brief period of time before it ceases operations, apparently permanently." Bronze Starlight, active since mid-2021, is also tracked by Microsoft under the emerging threat cluster moniker DEV-0401, with the tech giant emphasizing its involvement in all stages of the ransomware attack cycle right from initial access to the payload deployment. ... The key victims encompass pharmaceutical companies in Brazil and the U.S., a U.S.-based media organization with offices in China and Hong Kong, electronic component designers and manufacturers in Lithuania and Japan, a law firm in the U.S., and an aerospace and defense division of an Indian conglomerate.



Quote for the day:

"Leadership has a harder job to do than just choose sides. It must bring sides together." -- Jesse Jackson

Daily Tech Digest - June 25, 2022

What Are CI And CD In DevOps And How Do They Work?

The purpose of continuous delivery is to get a packaged build artifact into production. The whole delivery process, including deployment, is automated with CD. CD tasks may involve provisioning infrastructure, tracking changes (ticketing), deploying artifacts, verifying and tracking those changes, and halting the rollout if any problems arise. Some firms will use certain parts of continuous delivery to help them maintain their operational duties. A good example is employing a CD pipeline to handle infrastructure deployment. Some organizations will leverage their CD pipelines to coordinate infrastructure setup and configuration using configuration management tools such as Ansible, Chef, or Puppet. A CI/CD pipeline may appear to be overhead, but it is not. It is essentially an executable definition of the procedures that any developer must take in order to deliver a new version of a software product. Without an automated pipeline, developers would have to complete these processes manually, which would be significantly less productive.
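To illustrate the "executable definition" point, here is a minimal pipeline sketch in Python. Real pipelines are normally declared in a CI system's own format, and the commands below are placeholders, not a prescribed toolchain.

    import subprocess
    import sys

    # Each stage is one step a developer would otherwise have to run by hand.
    STAGES = [
        ("build", ["python", "-m", "build"]),           # package the artifact (placeholder)
        ("test", ["python", "-m", "pytest", "-q"]),     # run the automated test suite (placeholder)
        ("deploy", ["echo", "deploying artifact..."]),  # stand-in for the real deploy step
    ]

    def run_pipeline() -> None:
        for name, command in STAGES:
            print(f"--- {name} ---")
            result = subprocess.run(command)
            if result.returncode != 0:
                # Stop the rollout as soon as any stage fails.
                sys.exit(f"{name} stage failed; halting the pipeline")

    if __name__ == "__main__":
        run_pipeline()

The same sequence, expressed in a CI server's configuration, is what turns "the steps to deliver a new version" from tribal knowledge into something repeatable and auditable.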


Why You Need to Be an Influencer Brand and the 3 Rs of Becoming One

Of course, brands creating content has been around for decades. Content marketing is creating and distributing valuable, relevant and consistent content to attract and retain an audience, driving profitable action. The difference is that influencer brands have shifted their entire orientation to a consumer-centric integrated marketing communications (IMC) mindset. Influencer brands go beyond blogs, infographics, eBooks, testimonials, and how-to guides that appeal to the head. They have learned to appeal to the heart of their audience. This comes from seeing the world from the target's perspective, a shift that can be seen by following the three Rs of influence to direct brand content creation. For example, the focus of Yeti Coolers' content and engagement isn't selling coolers. It is selling a lifestyle that the coolers help enable. To that end, they organize products so customers can shop by activity. Images and copy lead with stories of the adventures their audience can have with the gear — fishing, hunting, camping, by the coast, in the snow, on the ranch and in the rodeo arena.


3 certification tips for IT leaders looking to get ahead

If leveraged properly, certifications can also assist IT decision-makers in their key leadership responsibilities. For example, Puneesh Lamba, CIO of Shahi Exports, an apparel manufacturing company, acknowledges that “certifications have helped him perform better in board meetings, thereby making it easier to get approvals on IT spending.” “Typically, CIOs from large technology companies have strong IT skills but poor communications skills, while it’s just the opposite for CIOs in customer facing B2C companies. These technology leaders need to get certified in areas that they lack. While CIOs push their team to get certified, they need to come out of their comfort zones and follow suit,” says Chandra. But the benefits of certifications won’t accrue automatically. IT leaders seeking to advance their skills and careers need to build a strategy aimed at squeezing the maximum value out of what certifications can offer. Here, four CIOs share their experiences in pursuing certifications and offer advice on how to make the most of these valuable career advancement tools as an IT leader.


Magnetic superstructures as a promising material for 6G technology

The race to realize sixth generation (6G) wireless communication systems requires the development of suitable magnetic materials. Scientists from Osaka Metropolitan University and their colleagues have detected an unprecedented collective resonance at high frequencies in a magnetic superstructure called a chiral spin soliton lattice (CSL), revealing CSL-hosting chiral helimagnets as a promising material for 6G technology. The study was published in Physical Review Letters. Future communication technologies require expanding the frequency band from the current few gigahertz (GHz) to over 100 GHz. Such high frequencies are not yet possible, given that existing magnetic materials used in communication equipment can only resonate and absorb microwaves up to approximately 70 GHz with a practical-strength magnetic field. Addressing this gap in knowledge and technology, the research team led by Professor Yoshihiko Togawa from Osaka Metropolitan University delved into the helicoidal spin superstructure CSL.


Don’t fall into the personal brand trap

While you can try to emulate the positive qualities of branding, the truth is that rulebook wasn’t designed with you in mind. Brands are static creations, while you must be a dynamic participant in your life and career. Brands let the consensus of others dictate their values and meaning, while you must discover both for yourself. Brands chase consistency by reorienting to match the expectations of “consumers,” while you must reserve room to grow and develop without a sense of self-fraudulence. Take the personal-branding prescription too far, and you run the risk of cementing your identity to the brand. New passions are unexplored. Fears and struggles must be ignored over concerns of not being “on brand.” And your life endeavors are filtered through the lens of marketability rather than the pursuit of their intrinsic worth. All of this can be counterproductive to your sense of authenticity. As one meta-analysis found, authenticity had a positive relationship with both well-being and engagement. But to achieve that, you must meet yourself as you are today, not who you were 10 years ago when you settled on your personal brand.


Is NextJS a Better Choice Than React in 2022?

If you know React, you kind of know NextJS. This is because Next is a React framework.
You have components just like in React. CSS has a different naming convention, but that's the biggest change. The reason Next is so good is that it gives you options. If you want a page to have good SEO, you can use getServerSideProps. If you want to use CSR, you can use useEffect to call your APIs, just like in React. Adding TypeScript to your Next project is also very simple. You even have a built-in router and don't have to use React Router. The option to choose between CSR, SSR, and SSG is what makes Next the best. You even get a free tier on Vercel for your Next project. Now that you're convinced that you should use Next.js, you might wonder how to change your existing website to Next. Next.js is designed for gradual adoption. Migrating from React to Next is pretty straightforward and can be done slowly by gradually adding more pages. You can configure your server so that everything under a specific subpath points to the Next.js app. If your site is abc.com, you can configure abc.com/about to serve a Next.js app. This has been explained really well in the Next.js docs.


How machine learning AI is going to change gaming forever

Obviously, machine learning techniques have broad implications for almost every sector of life, but how they will intersect with gaming has potentially some of the broadest implications for Microsoft as a business. One problem the video game industry generally faces right now pertains to the gap between expectations and investment. Video games are becoming increasingly complex to make, fund, and manage, as they explode in exponential complexity and graphical fidelity. We've seen absolutely insane Unreal Engine demos that showcase near-photorealistic scenes and graphics, but the manual labor involved in producing a full game based on some of these principles is truly palpable, both in terms of time and expense. What is typically thought of as "AI" in a gaming context generally hasn't been AI in the true sense of the word. Video game non-player characters (NPCs) and enemies generally operate on a rules-based model that often has to be manually crafted by a programmer. Machine learning models are importantly far more fluid, able to produce their own rules within parameters, and respond dynamically to new information on the fly.


Reflections about low-code data management

As more people began using the Internet, better tools and resources became available. Today, the market is full of low-code Content Management Systems (CMS) and drag-and-drop website builders (WordPress, HubSpot, Shopify, Squarespace, etc.) that make it easy to create a professional-looking website without any coding knowledge. While there are still a handful of very specific use cases where you would need to code a website from scratch, organizations realized that using a low-code CMS or drag-and-drop builder was a much better option in the vast majority of cases. This shift has led to a dramatic decrease in the amount of time and effort required to build a website. In fact, you can now create an entire website in just a few hours using these low-code tools. With every great shift comes some level of resistance. At first, web developers were skeptical of (or outright opposed to) low-code tools for the following reasons: Fear of Replacement: Developers saw these tools as a threat to their jobs. Power & Flexibility: Developers were unconvinced that they would be powerful, flexible, or customizable enough to produce the same quality of work.


Inside the Metaverse: Architects See Opportunity in a Virtual World

“The metaverse is not an escape, and it's not a video game,” Patrik Schumacher, principal at Zaha Hadid Architects (ZHA), told RECORD. “It will become the immersive internet for corporations, for education, for retail, and also for socializing and networking in more casual arenas. Everything we are doing in the real world could potentially be substituted or augmented or paralleled with interactions in the metaverse.” ZHA was one of the first major firms to take the plunge into metaverse design. In early March, the firm announced that it would build an entire metaverse city—a digital version of the unrecognized, and as yet unbuilt, sovereign state “Liberland” that was founded seven years ago by the right-wing Czech politician Vít Jedlička. “At the time, I was very frustrated with planning regulations and overbearing political constraints on city development,” says Schumacher, who has long fought against government intervention in urban development.


5 social engineering assumptions that are wrong

Users may be more inclined to interact with content if it appears to originate from a source they recognize and trust, but threat actors regularly abuse legitimate services such as cloud storage providers and content distribution networks to host and distribute malware as well as credential harvesting portals, according to Proofpoint. “Threat actors may prefer distributing malware via legitimate services due to their likelihood of bypassing security protections in email compared to malicious documents. Mitigating threats hosted on legitimate services continues to be a difficult vector to defend against as it likely involves implementation of a robust detection stack or policy-based blocking of services which might be business relevant,” the report read. ... There’s a tendency to assume that social engineering attacks are limited to email, but Proofpoint detected an increase in attacks perpetrated by threat actors leveraging a robust ecosystem of call center-based email threats involving human interaction over the telephone. “The emails themselves don’t contain malicious links or attachments, and individuals must proactively call a fake customer service number in the email to engage with the threat actor. ...”



Quote for the day:

"The ability to stay calm and polite, even when people upset you, is a superpower." -- Vala Afshar

Daily Tech Digest - June 24, 2022

Toward data dignity: Let’s find the right rules and tools for curbing the power of Big Tech

Enlightened new policies and legislation, building on blueprints like the European Union’s GDPR and California’s CCPA, are a critical start to creating a more expansive and thoughtful formulation for privacy. Lawmakers and regulators need to consult systematically with technologists and policymakers who deeply understand the issues at stake and the contours of a sustainable working system. That was one of the motivations behind the creation of the Ethical Tech Project: to gather like-minded ethical technologists, academics, and business leaders to engage in that intentional dialogue with policymakers. We are starting to see elected officials propose regulatory bodies akin to what the Ethical Tech Project was designed to do—convene tech leaders to build standards protecting users against abuse. A recently proposed federal watchdog would be a step in the right direction to usher in proactive tech regulation and start a conversation between the government and the individuals who have the know-how to find and define the common-sense privacy solutions consumers need.


For HPC Cloud The Underlying Hardware Will Always Matter

For a large contingent of those ordinary enterprise cloud users, the belief is that a major benefit of the cloud is not thinking about the underlying infrastructure. But, in fact, understanding the underlying infrastructure is critical to unleashing the value and optimal performance of a cloud deployment. Even more so, HPC application owners need in-depth insight and therefore, a trusted hardware platform with co-design and portability built in from the ground up and solidified through long-running cloud provider partnerships. ... In other words, the standard lift-and-shift approach to cloud migration is not an option. The need for blazing fast performance with complex parallel codes means fine-tuning hardware and software. That’s critical for performance and for cost optimization, says Amy Leeland, director of hyperscale cloud software and solutions at Intel. “Software in the cloud isn’t always set by default to use Intel CPU extensions or embedded accelerators for optimal performance, even though it is so important to have the right software stack and optimizations to unlock the potential of a platform, even on a public cloud,” she explains.


NSA, CISA say: Don't block PowerShell, here's what to do instead

Defenders shouldn't disable PowerShell, a scripting language, because it is a useful command-line interface for Windows that can help with forensics, incident response and automating desktop tasks, according to joint advice from the US spy service the National Security Agency (NSA), the US Cybersecurity and Infrastructure Security Agency (CISA), and the New Zealand and UK national cybersecurity centres. ... So, what should defenders do? Remove PowerShell? Block it? Or just configure it? "Cybersecurity authorities from the United States, New Zealand, and the United Kingdom recommend proper configuration and monitoring of PowerShell, as opposed to removing or disabling PowerShell entirely," the agencies say. "This will provide benefits from the security capabilities PowerShell can enable while reducing the likelihood of malicious actors using it undetected after gaining access into victim networks." PowerShell's extensibility, and the fact that it ships with Windows 10 and 11, gives attackers a means to abuse the tool. 


How companies are prioritizing infosec and compliance

“This study confirmed our long-standing theory that when security and compliance have a unified strategy and vision, every department and employee within the organization benefits, as does the business customer,” said Christopher M. Steffen, managing research director of EMA. “Most organizations view compliance and compliance-related activities as “the cost of business,” something they have to do to conduct operations in certain markets. Increasingly, forward-thinking organizations are looking for ways to maximize their competitive advantage in their markets, and having a best-in-class data privacy program or compliance program is something that more savvy customers are interested in, especially in organizations with a global reach. Compliance is no longer a “table stakes” proposition: comprehensive compliance programs focused on data security and privacy can be the difference in very tight markets and are often a deciding factor for organizations choosing one vendor over another.”


IDC Perspective on Integration of Quantum Computing and HPC

Quantum and classical hardware vendors are working to develop quantum and quantum-inspired computing systems dedicated to solving HPC problems. For example, using a co-design approach, quantum start-up IQM is mapping quantum applications and algorithms directly to the quantum processor to develop an application-specific superconducting computer. The result is a quantum system optimized to run particular applications such as HPC workloads. In collaboration with Atos, quantum hardware start-up Pasqal is working to incorporate its neutral-atom quantum processors into HPC environments. NVIDIA’s cuQuantum Appliance and cuQuantum software development kit provide enterprises the quantum simulation hardware and developer tools needed to integrate and run quantum simulations in HPC environments. At a more global level, the European High Performance Computing Joint Undertaking (EuroHPC JU) announced its funding for the High-Performance Computer and Quantum Simulator (HPCQS) hybrid project.


Australian researchers develop a coherent quantum simulator

“What we’re doing is making the actual processor itself mimic the single carbon-carbon bonds and the double carbon-carbon bonds,” Simmons explains. “We literally engineered, with sub-nanometre precision, to try and mimic those bonds inside the silicon system. So that’s why it’s called a quantum analog simulator.” Using the atomic transistors in their machine, the researchers simulated the covalent bonds in polyacetylene. According to the SSH theory, there are two different scenarios in polyacetylene, called “topological states” – “topological” because of their different geometries. In one state, you can cut the chain at the single carbon-carbon bonds, so you have double bonds at the ends of the chain. In the other, you cut the double bonds, leaving single carbon-carbon bonds at the ends of the chain and isolating the two atoms on either end due to the longer distance in the single bonds. The two topological states show completely different behaviour when an electrical current is passed through the molecular chain. That’s the theory. “When we make the device,” Simmons says, “we see exactly that behaviour. So that’s super exciting.”


Is Kubernetes key to enabling edge workloads?

Lightweight and deployed in milliseconds, containers enable compatibility between different infrastructure environments and apps running across disparate platforms. Isolating edge workloads in containers protects them from cyber threats while microservices let developers update apps without worrying about platform-level dependencies. Benefits of orchestrating edge containers with Kubernetes include: Centralized Management — Users control the entire app deployment across on-prem, cloud, and edge environments through a single pane of glass. Accelerated Scalability — Automatic network rerouting and the capability to self-heal or replace existing nodes in case of failure remove the need for manual scaling. Simplified Deployment — Cloud-agnostic, DevOps-friendly, and deployable anywhere from VMs to bare metal environments, Kubernetes grants quick and reliable access to hybrid cloud computing. Resource Optimization — Kubernetes maximizes the use of available resources on bare metal and provides an abstraction layer on top of VMs optimizing their deployment and use.


Canada Introduces Infrastructure and Data Privacy Bills

The bill sets up a clear legal framework and details expectations for critical infrastructure operators, says Sam Andrey, a director at think tank Cybersecure Policy Exchange at Toronto Metropolitan University. The act also creates a framework for businesses and government to exchange information on the vulnerabilities, risks and incidents, Andrey says, but it does not address some other key aspects of cybersecurity. The bill should offer "greater clarity" on the transparency and oversight into what he says are "fairly sweeping powers." These powers, he says, could perhaps be monitored by the National Security and Intelligence Review Agency, an independent government watchdog. It lacks provisions to protect "good faith" researchers. "We would urge the government to consider using this law to require government agencies and critical infrastructure operators to put in place coordinated vulnerability disclosure programs, through which security researchers can disclose vulnerabilities in good faith," Andrey says.


Prioritize people during cultural transformation in 3 steps

Addressing your employees’ overall well-being is also critical. Many workers who are actively looking for a new job say they’re doing so because their mental health and well-being has been negatively impacted in their current role. Increasingly, employees are placing greater value on their well-being than on their salary and job title. This isn’t a new issue, but it’s taken on a new urgency since COVID pushed millions of workers into the remote workplace. For example, a 2019 Buffer study found that 19 percent of remote workers reported feeling lonely working from home – not surprising, since most of us were forced to severely limit our social interactions outside of work as well. Leaders can help address this by taking actions as simple as introducing more one-to-one meetings, which can boost morale. One-on-one meetings are essential to promoting ongoing feedback. When teams worked together in an office, communication was more efficient mainly because employees and managers could meet and catch up organically throughout the day.


Pathways to a Strategic Intelligence Program

Strong data visualization capabilities can also be a huge boost to the effectiveness of a strategic intelligence program because they help executive leadership, including the board, quickly understand and evaluate risk information. “There’s an overwhelming amount of data out there and so it’s crucial to be able to separate the signal from the noise,” he says. “Good data visualization tools allow you to do that in a very efficient, impactful and cost-effective manner, and to communicate information to busy senior leaders in a way that is most useful for them.” Calagna agrees that data visualization tools play an important role in bringing strategic intelligence to life for leaders across functions within any organization, helping them to understand complex scenarios and insights more easily than narrative and other report forms may permit. “By quickly turning high data volumes into complex analyses, data visualization tools can enable organizations to relay near real-time insights and intelligence that support better informed decision-making,” she says. Data visualization tools can help monitor trends and assumptions that impact strategic plans and market forces and shifts that will inform strategic choices.



Quote for the day:

"Patience puts a crown on the head." -- Ugandan Proverb

Daily Tech Digest - June 23, 2022

Microsoft’s framework for building AI systems responsibly

AI systems are the product of many different decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, we need to proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date. The Standard details concrete goals or outcomes that teams developing AI systems must strive to secure. These goals help break down a broad principle like ‘accountability’ into its key enablers, such as impact assessments, data governance, and human oversight. Each goal is then composed of a set of requirements, which are steps that teams must take to ensure that AI systems meet the goals throughout the system lifecycle.


Success Demands Sacrifice. What Are You Willing to Give Up?

The key is to preplan your sacrifices rather than sacrifice parts of your life by default. Look at your normal schedule and think about where you could find the extra time and energy for your business, without sacrificing the things you value most in life. Maybe you decide to stay up later after the kids are in bed to get work done. Maybe you stop binge-watching on Hulu so you can get to the gym. Maybe you give up that second round of golf each week to spend more time with your spouse. Maybe you leave the office for a couple of hours to catch your kid's soccer game and come back later. Maybe you sacrifice some money to get extra help for the business. Maybe you stop micro-managing everything in your business and actually delegate more responsibility to others. We all have areas where we spend our time that we can tweak. You just have to decide what's right for you. You'll always have to sacrifice something to build a business or accomplish anything extraordinary in life. But giving up what you value most is not a good trade-off. Make sure you're making smart sacrifices by giving up what doesn't matter for things that do.


Microsoft to retire controversial facial recognition tool that claims to identify emotion

The decision is part of a larger overhaul of Microsoft’s AI ethics policies. The company’s updated Responsible AI Standards (first outlined in 2019) emphasize accountability to find out who uses its services and greater human oversight into where these tools are applied. In practical terms, this means Microsoft will limit access to some features of its facial recognition services (known as Azure Face) and remove others entirely. Users will have to apply to use Azure Face for facial identification, for example, telling Microsoft exactly how and where they’ll be deploying its systems. Some use cases with less harmful potential (like automatically blurring faces in images and videos) will remain open-access. ... “Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability,” wrote Microsoft’s chief responsible AI officer.


The Unreasonable Effectiveness of Zero Shot Learning

OpenAI also has something for that. They have OpenAI CLIP, which stands for Contrastive Language-Image Pre-training. What this model does is bring together text and image embeddings. It generates an embedding for each text and an embedding for each image, and these embeddings are aligned with each other. The way this model was trained is that, for example, you have a set of images, like an image of a cute puppy. Then you have a set of text like, Pepper the Aussie Pup. The way it's trained is that hopefully the distance between the embedding of this picture of this puppy and the embedding of the text, Pepper the Aussie Pup, is really small. It's trained on 400 million image-text pairs, which were scraped from the internet. You can imagine that someone did indeed put an image of a puppy on the internet, and didn't write under it, "This is Pepper the Aussie Pup."
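As a concrete illustration of how those aligned embeddings get used for zero-shot classification, here is a short sketch that loads the publicly released CLIP checkpoint through the Hugging Face transformers library. The image path and candidate captions are placeholders.

    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Load the pretrained CLIP weights released by OpenAI.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("puppy.jpg")  # placeholder image path
    captions = ["a photo of a puppy", "a photo of a cat", "a diagram of a network"]

    # Embed the image and each caption, then compare them in the shared embedding space.
    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
    outputs = model(**inputs)
    probs = outputs.logits_per_image.softmax(dim=1)

    for caption, prob in zip(captions, probs[0].tolist()):
        print(f"{prob:.3f}  {caption}")

The caption with the highest score is the model's zero-shot guess, even though none of these captions were ever defined as training classes; the alignment between the two embedding spaces does all the work.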


Quantum Advantage in Learning from Experiments

Quantum computers will likely offer exponential improvements over classical systems for certain problems, but to realize their potential, researchers first need to scale up the number of qubits and to improve quantum error correction. What’s more, the exponential speed-up over classical algorithms promised by quantum computers relies on a big, unproven assumption about so-called “complexity classes” of problems — namely, that the class of problems that can be solved on a quantum computer is larger than those that can be solved on a classical computer. It seems like a reasonable assumption, and yet, no one has proven it. Until it's proven, every claim of quantum advantage will come with an asterisk: that it can do better than any known classical algorithm. Quantum sensors, on the other hand, are already being used for some high-precision measurements and offer modest (and proven) advantages over classical sensors. Some quantum sensors work by exploiting quantum correlations between particles to extract more information about a system than it otherwise could have.


How AI is changing IoT

The cloud can’t scale proportionately to handle all the data that comes from IoT devices, and transporting data from the IoT devices to the cloud is bandwidth-limited. No matter the size and sophistication of the communications network, the sheer volume of data collected by IoT devices leads to latency and congestion. Several IoT applications, such as autonomous cars, rely on rapid, real-time decision-making. To be effective and safe, autonomous cars need to process data and make instantaneous decisions (just like a human being). They can’t be limited by latency, unreliable connectivity, and low bandwidth. Autonomous cars are far from the only IoT applications that rely on this rapid decision making. Manufacturing already incorporates IoT devices, and delays or latency could impact the processes or limit capabilities in the event of an emergency. In security, biometrics are often used to restrict or allow access to specific areas. Without rapid data processing, there could be delays that impact speed and performance, not to mention the risks in emergent situations.


A Huge Step Forward in Quantum Computing Was Just Announced: The First-Ever Quantum Circuit

The landmark discovery, published in Nature today, was nine years in the making. "This is the most exciting discovery of my career," senior author and quantum physicist Michelle Simmons, founder of Silicon Quantum Computing and director of the Center of Excellence for Quantum Computation and Communication Technology at UNSW told ScienceAlert. Not only did Simmons and her team create what's essentially a functional quantum processor, they also successfully tested it by modeling a small molecule in which each atom has multiple quantum states – something a traditional computer would struggle to achieve. This suggests we're now a step closer to finally using quantum processing power to understand more about the world around us, even at the tiniest scale. "In the 1950s, Richard Feynman said we're never going to understand how the world works – how nature works – unless we can actually start to make it at the same scale," Simmons told ScienceAlert. "If we can start to understand materials at that level, we can design things that have never been made before."


How to Handle Third-Party Cyber Incident Response

With tier-1 support, you have someone watching the stuff that is running. Their setup alerts them to the fact that something bad happened. They're going to turn to a tier-2 person and say, “Hey, can you check this out and see if it really is something bad?” And so the tier-2 person takes a look. Maybe they'll take a look at that laptop or that part of the network or a server. If it wasn't a false alert, and it looks like bad behavior, then it goes to tier 3. Typically, the person running that is much more detailed and technical. They'll do a forensic analysis. And they look at all of the bits that are moving: the communication and what happened. They know adversary tactics, techniques, and procedures (TTP). They're really good at tracking the adversary in the environment. When you're looking for a third-party incident response and support agreement, you have to know what you, as a company, have the skills to do. Then you contract out for tier 2 or tier 3. They're going to come in and provide support. Service level agreements are critical. What are you expecting? The more you want, the more you're going to pay.


IT leadership: 3 ways CIOs prevent burnout

“Prioritize yourself. It is not selfish; it’s an act of self-care. Set aside an ‘hour of power’ every day, first thing in the morning. During this hour, go analog and keep all digital distractions away. Protect that time fiercely and find an activity that nourishes your mind. For instance, learn something new and exciting, read some non-fiction that is energizing and inspiring, journal, or meditate. Find what works for you and do it every day. “Get moving. A healthy mind needs a healthy body. Do something, anything, to get some physical activity into your day. If dancing to disco is your thing, turn up the volume and go for it. Posting it on TikTok is optional, and maybe not advisable. “Stay connected. You are not alone – no matter what you’re going through, someone else has experienced it. Showing vulnerability is not a weakness, it is a strength. Build and nurture a close group of trusted advisors, preferably outside your company. Build relationships before you need them. Don’t be afraid to ask for help. They can help you work through challenges and provide an avenue to help others on this journey.”


Zscaler Posture Control Correlates, Prioritizes Cloud Risks

Zscaler Posture Control wants to make it easier for developers to take a hands-on approach to keeping their companies safe and incorporate best security practices during the development stage, according to Chaudhry. He says Zscaler hopes that 10% of its more than 5,600 customers will be using the company's entire cloud workflow protection offering within the next year. "Doing patch management after the application is built is extremely hard," Chaudhry says. "It was important for us to make sure that the developers are taking a more active role in their part of the security implementation." Zscaler wants to learn from the 210 billion transactions it processes daily to better remediate risk on an ongoing basis, addressing everything from unpatched vulnerabilities and overprivileged entitlements to Amazon S3 buckets that have erroneously been left open, Chaudhry says. Zscaler will put data points from these transactions into its artificial intelligence model to better protect customers going forward.



Quote for the day:

"Leadership is the creation of an environment in which others are able to self-actualize in the process of completing the job." -- John Mellecker

Daily Tech Digest - June 22, 2022

What you need to know about site reliability engineering

What is site reliability engineering? The creator of the first site reliability engineering (SRE) program, Benjamin Treynor Sloss at Google, described it this way: Site reliability engineering is what happens when you ask a software engineer to design an operations team. What does that mean? Unlike traditional system administrators, site reliability engineers (SREs) apply solid software engineering principles to their day-to-day work. For laypeople, a clearer definition might be: Site reliability engineering is the discipline of building and supporting modern production systems at scale. SREs are responsible for reliability, performance, availability, latency, efficiency, monitoring, emergency response, change management, release planning, and capacity planning for both infrastructure and software. ... SREs should be spending more time designing solutions than applying band-aids. A general guideline is for SREs to spend 50% of their time in engineering work, such as writing code and automating tasks. When an SRE is on-call, time should be split between managing incidents (about 25%) and operations duty (about 25%).


Are blockchains decentralized?

Over the past year, Trail of Bits was engaged by the Defense Advanced Research Projects Agency (DARPA) to examine the fundamental properties of blockchains and the cybersecurity risks associated with them. DARPA wanted to understand those security assumptions and determine to what degree blockchains are actually decentralized. To answer DARPA’s question, Trail of Bits researchers performed analyses and meta-analyses of prior academic work and of real-world findings that had never before been aggregated, updating prior research with new data in some cases. They also did novel work, building new tools and pursuing original research. The resulting report is a 30-thousand-foot view of what’s currently known about blockchain technology. Whether these findings affect financial markets is out of the scope of the report: our work at Trail of Bits is entirely about understanding and mitigating security risk. The report also contains links to the substantial supporting and analytical materials. Our findings are reproducible, and our research is open-source and freely distributable. So you can dig in for yourself.


Why The Castle & Moat Approach To Security Is Obsolete

At first, the shift in security strategy went from protecting one, single castle to a “multiple castle” approach. In this scenario, you’d treat each salesperson’s laptop as a sort of satellite castle. SaaS vendors and cloud providers played into this idea, trying to convince potential customers not that they needed an entirely different way to think about security, but rather that, by using a SaaS product, they were renting a spot in the vendor’s castle. The problem is that once you have so many castles, the interconnections become increasingly more difficult to protect. And it’s harder to say exactly what is “inside” your network versus what is hostile wilderness. Zero trust assumes that the castle system has broken down completely, so that each individual asset is a fortress of one. Everything is always hostile wilderness, and you operate under the assumption that you can implicitly trust no one. It’s not an attractive vision for society, which is why we should probably retire the castle and moat metaphor, because it does make sense to eliminate the human concept of trust in our approach to cybersecurity and treat every user as potentially hostile.


Improving AI-based defenses to disrupt human-operated ransomware

Disrupting attacks in their early stages is critical for all sophisticated attacks but especially human-operated ransomware, where human threat actors seek to gain privileged access to an organization’s network, move laterally, and deploy the ransomware payload on as many devices in the network as possible. For example, with its enhanced AI-driven detection capabilities, Defender for Endpoint managed to detect and incriminate a ransomware attack early in its encryption stage, when the attackers had encrypted files on fewer than four percent (4%) of the organization’s devices, demonstrating improved ability to disrupt an attack and protect the remaining devices in the organization. This instance illustrates the importance of the rapid incrimination of suspicious entities and the prompt disruption of a human-operated ransomware attack. ... A human-operated ransomware attack generates a lot of noise in the system. During this phase, solutions like Defender for Endpoint raise many alerts upon detecting multiple malicious artifacts and behavior on many devices, resulting in an alert spike.


Reexamining the “5 Laws of Cybersecurity”

The first rule of cybersecurity is to treat everything as if it’s vulnerable because, of course, everything is vulnerable. Every risk management course, security certification exam, and audit mindset always emphasizes that there is no such thing as a 100% secure system. Arguably, the entire cybersecurity field is founded on this principle. ... The third law of cybersecurity, originally popularized as one of Brian Krebs’ 3 Rules for Online Safety, aims to minimize attack surfaces and maximize visibility. While Krebs was referring only to installed software, the ideology supporting this rule has expanded. For example, many businesses retain data, systems, and devices they don’t use or need anymore, especially as they scale, upgrade, or expand. This is like that old, beloved pair of worn-out running shoes that sits in a closet. This excess can present unnecessary vulnerabilities, such as a decades-old exploit discovered in some open source software. ... The final law of cybersecurity states that organizations should prepare for the worst. This is perhaps truer than ever, given how rapidly cybercrime is evolving. The risks of a zero-day exploit are too high for businesses to assume they’ll never become the victims of a breach.


How to Adopt an SRE Practice (When You’re not Google)

At a very high level, Google defines the core of SRE principles and practices as an ability to ‘embrace risk.’ Site reliability engineers balance the organizational need for constant innovation and delivery of new software with the reliability and performance of production environments. The practice of SRE grows as the adoption of DevOps grows because they both help balance the sometimes opposing needs of the development and operations teams. Site reliability engineers inject processes into the CI/CD and software delivery workflows to improve performance and reliability, but they will know when to sacrifice stability for speed. By working closely with DevOps teams to understand critical components of their applications and infrastructure, SREs can also learn the non-critical components. Creating transparency across all teams about the health of their applications and systems can help site reliability engineers determine a level of risk they can feel comfortable with. The level of desired service availability and acceptable performance issues that you can reasonably allow will depend on the type of service you support as well.
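One common way teams make "embracing risk" concrete is an error budget: pick an availability target for a service and compute how much unreliability that target tolerates over a window. The 99.9% target and 30-day window below are example numbers, not a recommendation.

    def error_budget_minutes(availability_target: float, window_days: int = 30) -> float:
        # The error budget is the fraction of the window the SLO allows to fail.
        window_minutes = window_days * 24 * 60
        return (1.0 - availability_target) * window_minutes

    # A 99.9% monthly target leaves roughly 43 minutes of downtime to "spend"
    # on risky releases, experiments, and incidents before the SLO is breached.
    print(error_budget_minutes(0.999))  # ~43.2

When the budget is nearly spent, stability wins; when plenty remains, the team can afford to trade some stability for speed, which is exactly the balancing act described above.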


Are Snowflake and MongoDB on a collision course?

At first blush, it looks like Snowflake is seeking to get the love from the crowd that put MongoDB on the map. But a closer look suggests that Snowflake is appealing not to the typical JavaScript developer who works with a variable schema in a document database, but to developers who may write in various languages yet are accustomed to running their code as user-defined functions, user-defined table functions or stored procedures in a relational database. There’s a similar issue with data scientists and data engineers working in Snowpark, but with one notable exception: They have the alternative of executing their code through external functions. That, of course, prompts the debate over whether it’s more performant to run everything inside the Snowflake environment or bring in an external server – one that we’ll explore in another post. While document-oriented developers working with JSON might perceive SQL UDFs as foreign territory, Snowflake is making one message quite clear with the Native Application Framework: As long as developers want to run their code in UDFs, they will be just as welcome to profit off their work as the data folks.


Fermyon wants to reinvent the way programmers develop microservices

If you’re thinking the solution sounds a lot like serverless, you’re not wrong, but Matt Butcher, co-founder and CEO at Fermyon, says that instead of forcing a function-based programming paradigm, the startup decided to use WebAssembly, a much more robust programming environment originally created for the browser. Using WebAssembly solved a bunch of problems for the company, including security, speed and efficiency in terms of resources. “All those things that made it good for the browser were actually really good for the cloud. The whole isolation model that keeps WebAssembly from being able to attack the hosts through the browser was the same kind of [security] model we wanted on the cloud side,” Butcher explained. What’s more, a WebAssembly module can download really quickly and execute instantly, which addresses the performance questions. And finally, instead of having a bunch of servers just sitting around waiting in case there’s peak traffic, Fermyon can start modules up nearly instantly and run them on demand.


Metaverse Standards Forum Launches to Solve Interoperability

According to Trevett, the new forum will not concern itself with philosophical debates about what the metaverse will be in 10-20 years time. However, he thinks the metaverse is “going to be a mixture of the connectivity of the web, some kind of evolution of the web, mixed in with spatial computing.” He added that spatial computing is a broad term, but here refers to “3D modeling of the real world, especially in interaction through augmented and virtual reality.” “No one really knows how it’s all going to come together,” said Trevett. “But that’s okay. For the purposes of the forum, we don’t really need to know. What we are concerned with is that there are clear, short-term interoperability problems to be solved.” Trevett noted that there are already multiple standards organizations for the internet, including of course the W3C for web standards. What MSF is trying to do is help coordinate them, when it comes to the evolving metaverse. “We are bringing together the standards organizations in one place, where we can coordinate between each other but also have good close relationships with the industry that [is] trying to use our standards,” he said.


What We Now Know: Digital Transformation Reaches a Point of Clarity

Technology adoption, as part of a digital transformation initiative, is generally of a greater scale and impact than what most are accustomed to, primarily because we are looking not only to revamp parts of our IT enterprise but also to introduce brand-new technology architecture environments composed of a combination of heavy-duty systems. In addition to the due diligence that comes with planning for and incorporating new technology innovations, with digital transformation initiatives we need to be extra careful not to be lured into over-automation. The reengineering and optimization of our business processes in support of enhanced productivity and customer-centricity need to be balanced with practical considerations and the opportunity to first prove that a given enhancement is actually effective with our customers before building further enhancements on top of it. If we automate too much too soon, rolling back will be painful, both financially and organizationally. Laying out a phased approach avoids this.



Quote for the day:

"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell

Daily Tech Digest - June 21, 2022

Effective Software Testing – A Developer’s Guide

When decisions depend on multiple conditions (i.e. complex if-statements), it is possible to get decent bug detection without having to test all possible combinations of conditions. Modified condition/decision coverage (MC/DC) exercises each condition so that it, independently of all the other conditions, affects the outcome of the entire decision. In other words, every possible condition of each parameter must influence the outcome at least once. The author does a good job of showing how this is done with an example. So, given that you can check the code coverage, you must decide how rigorous you want to be when covering decision points, and create test cases accordingly. The concept of boundary points is useful here. For a loop, it is reasonable to at least test when it executes zero, one and many times. It can seem like it should be enough to just do structural testing and not bother with specification-based testing, since structural testing makes sure all the code is covered. However, this is not true. Analyzing the requirements can lead to more test cases than simply checking coverage. For example, if results are added to a list, a test case adding one element will cover all the code.
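A small illustration of both points, using a hypothetical function and tests (not taken from the book): a single test that adds one element reaches every line, yet specification-based reasoning still suggests the zero/one/many and duplicate cases; and an MC/DC-style set for a compound decision needs only n+1 tests, not all 2^n combinations.

```python
# Hypothetical example: structural coverage alone is not enough.

def add_unique(items: list, value) -> list:
    """Append value to items unless it is already present."""
    if value not in items:
        items.append(value)
    return items

def test_add_one_element():
    # This alone achieves 100% line coverage...
    assert add_unique([], 1) == [1]

def test_duplicate_is_not_added():
    # ...but the specification implies more cases worth testing.
    assert add_unique([1], 1) == [1]

def test_many_elements():
    assert add_unique([1, 2, 3], 4) == [1, 2, 3, 4]


# MC/DC-style coverage of a compound decision: four tests for three
# conditions, where each condition is flipped against a test differing
# only in that condition and the outcome changes.
def discount_applies(member: bool, coupon: bool, big_order: bool) -> bool:
    return (member and coupon) or big_order

def test_mcdc_style_set():
    assert discount_applies(True, True, False) is True     # baseline
    assert discount_applies(False, True, False) is False   # 'member' alone flips it
    assert discount_applies(True, False, False) is False   # 'coupon' alone flips it
    assert discount_applies(False, True, True) is True     # 'big_order' alone flips it (vs. row 2)
```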


Inconsistent thoughts on database consistency

While linearizability is about a single piece of data, serializability is about multiple pieces of data. More specifically, serializability is about how to treat concurrent transactions on the same underlying pieces of data. The “safest” way to handle this is to line up transactions in the order they arrived and execute them serially, making sure that one finishes before the next one starts. In reality, this is quite slow, so we often relax it by executing multiple transactions concurrently. However, there are different levels of safety around this concurrent execution, as we’ll discuss below. Consistency models are super interesting, and the Jepsen breakdown is enlightening. If I had to quibble, it’s that I still don’t quite understand the interplay between the two poles of consistency models. Can I choose a lower level of linearizability along with the highest level of serializability? Or does the existence of any level lower than linearizable mean that I’m out of the serializability game altogether? If you understand this, hit me up! Or better yet, write up a better explanation than I ever could :). If you do, let me know so I can link it here.
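A toy sketch of the "safest" approach described above: transactions are queued in arrival order and executed one at a time, so each one sees the effects of everything that arrived before it (a trivially serializable schedule, and also the slowest possible one).

```python
# Toy sketch: serial execution of transactions in arrival order.
# Each transaction runs to completion before the next starts.
from typing import Callable, Dict, List

accounts: Dict[str, int] = {"alice": 100, "bob": 50}
queue: List[Callable[[], None]] = []

def submit(txn: Callable[[], None]) -> None:
    """Queue a transaction in the order it arrived."""
    queue.append(txn)

def run_serially() -> None:
    """Execute queued transactions one at a time, in arrival order."""
    for txn in queue:
        txn()

def transfer(src: str, dst: str, amount: int) -> Callable[[], None]:
    def txn() -> None:
        if accounts[src] >= amount:       # check and update happen atomically
            accounts[src] -= amount       # because nothing else runs in between
            accounts[dst] += amount
    return txn

submit(transfer("alice", "bob", 30))
submit(transfer("bob", "alice", 70))
run_serially()
print(accounts)  # {'alice': 140, 'bob': 10}
```

The second transfer only succeeds because it observes the completed first one; a concurrent interleaving could have seen bob's original balance of 50 and refused, which is exactly the kind of anomaly the weaker isolation levels trade away for speed.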


AI and How It’s Helping Banks to Lower Costs

Using AI helps banks lower the cost of predicting future trends. Instead of hiring financial analysts to analyze data, AI is used to organize and present data that the banks can act on. They can get real-time data to analyze behaviors, predict future trends, and understand outcomes. With this, banks gather more data that, in turn, helps them make better predictions. ... Another advantage of using AI in the banking industry is that it reduces human error. By reducing errors, banks prevent the loss of revenue those errors cause. Moreover, human errors can lead to financial data breaches. When that happens, critical data may be exposed to criminals, who can use the stolen information to impersonate clients in fraudulent activities. Especially with a high volume of work, employees cannot avoid making errors. With the help of AI, banks can reduce a variety of them. ... AI also helps banks save money by detecting fraudulent payments. Without AI, banks may lose millions to criminal activity. But thanks to AI, banks can prevent such losses, as the technology can analyze more than one channel of data to detect fraud.
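A purely illustrative sketch of the kind of multi-signal fraud screening described here, using synthetic data and scikit-learn's IsolationForest standing in for a production model:

```python
# Illustrative only: flagging anomalous transactions with an unsupervised
# model over more than one "channel" of data (amount and hour of day).
# All data below is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal transactions: modest amounts, daytime hours.
normal = np.column_stack([rng.normal(60, 20, 500), rng.normal(14, 3, 500)])
# A few suspicious ones: very large amounts in the middle of the night.
suspicious = np.array([[5000, 3], [7200, 2], [6400, 4]])
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 marks anomalies
print(f"flagged {np.sum(flags == -1)} of {len(X)} transactions for review")
```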


Is NoOps the End of DevOps?

NoOps is not a one-size-fits-all solution. It is limited to apps that fit into existing serverless and PaaS solutions. Since some enterprises still run on monolithic legacy apps (requiring total rewrites or massive updates to work in a PaaS environment), you would still need someone to take care of operations even if only a single legacy system were left behind. In this sense, NoOps is still a long way from handling long-running apps that run specialized processes, or production environments with demanding applications. With DevOps, by contrast, operations work happens before code goes to production: releases include monitoring, testing, bug fixes, security and policy checks on every commit, and so on. You must have everyone on the team (including key stakeholders) involved from the beginning to enable fast feedback and ensure automated controls and tasks are effective and correct. Continuous learning and improvement (a pillar of DevOps teams) shouldn’t only happen when things go wrong; instead, members must work together collaboratively to problem-solve and improve systems and processes.


How IT Can Deliver on the Promise of Cloud

While many newcomers to the cloud assume that hyperscalers will handle most of the security, the truth is they don’t. Public cloud providers such as AWS, Google, and Microsoft Azure publish shared responsibility models that push security of the data, platform, applications, operating system, network and firewall configuration, and server-side encryption to the customer. That is a lot to oversee, with high levels of risk and exposure should things go wrong. Have you set up ransomware protection? Monitored your network environment for ongoing threats? Arranged for security between your workloads and your client environment? Secured sets of connections for remote client access or remote desktop environments? Maintained audit control of open source applications running in your cloud-native or containerized workloads? These are just some of the security challenges IT faces. Security of the cloud itself – the infrastructure and storage – falls to the service providers. But your IT staff must handle just about everything else.
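A small sketch of what owning "server-side encryption" on the customer's side of the shared responsibility model can look like in practice: auditing whether S3 buckets have default encryption configured, using boto3 (credentials and bucket names are assumed to come from your normal AWS environment).

```python
# Sketch: auditing one slice of the customer's side of the shared
# responsibility model -- default server-side encryption on S3 buckets.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        print(f"{name}: default encryption configured")
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: NO default encryption -- follow up")
        else:
            raise
```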


Distributed Caching on Cloud

Caching is a technique for keeping data outside of the main storage, in high-speed memory, to improve performance. In a microservices environment, all apps are deployed with multiple instances across various servers/containers on the hybrid cloud. A single caching source is needed in a multicluster Kubernetes environment on the cloud to persist data centrally and replicate it across its own caching cluster. It serves as a single point of storage for cached data in a distributed environment. ... Distributed caching is now a de facto requirement for distributed microservices apps deployed on hybrid cloud. It addresses important use cases such as maintaining user sessions when cookies are disabled in the web browser, improving API query read performance, avoiding operational cost and repeated database hits for the same type of request, and managing secret tokens for authentication and authorization. A distributed cache syncs data across hybrid clouds automatically, without manual intervention, and always serves the latest data.
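A minimal cache-aside sketch of the "avoid repeated database hits for the same request" use case, assuming Redis as the shared distributed cache (the article does not name a specific product) and the redis-py client; fetch_from_database is a placeholder for the real query.

```python
# Minimal cache-aside sketch: check the shared cache first, fall back to
# the database on a miss, then populate the cache with a TTL.
import json
import redis

cache = redis.Redis(host="cache.internal", port=6379, db=0)

def fetch_from_database(user_id: str) -> dict:
    # Placeholder for the real (expensive) database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id: str, ttl_seconds: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: no database round trip
    user = fetch_from_database(user_id)      # cache miss: hit the database once
    cache.setex(key, ttl_seconds, json.dumps(user))
    return user
```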


Bridging The Gap Between Open Source Database & Database Business

It is relatively easy to get a group of people together to create a new database management system or data store. We know this because over the past five decades of computing, the rate at which tools for providing structure to data proliferate has increased, seemingly at an increasing rate, thanks in no small part to innovation by the hyperscalers and cloud builders as well as academics who just plain like mucking around in the guts of a database to prove a point. But it is another thing entirely to take an open source database or data store project and turn it into a business that can provide enterprise-grade fit and finish and support a much wider variety of use cases and customer types and sizes. This is hard work, and it takes a lot of people, focus, money – and luck. This is the task that Dipti Borkar, Steven Mih, and David Simmen took on when they launched Ahana two years ago to commercialize the PrestoDB variant of the Presto distributed SQL engine created by Facebook. Not coincidentally, it is a similar task that the original creators of Presto have taken on with the PrestoSQL variant, now called Trino, which is commercialized by their company, Starburst.


Data gravity: What is it and how to manage it

Examples of data gravity include applications and datasets moving to be closer to a central data store, which could be on-premise or co-located. This makes best use of existing bandwidth and reduces latency. But it also begins to limit flexibility, and can make it harder to scale to deal with new datasets or adopt new applications. Data gravity occurs in the cloud, too. As cloud data stores increase in size, analytics and other applications move towards them. This takes advantage of the cloud’s ability to scale quickly, and minimises performance problems. But it perpetuates the data gravity issue. Cloud storage egress fees are often high and the more data an organisation stores, the more expensive it is to move it, to the point where it can be uneconomical to move between platforms. McCrory refers to this as “artificial” data gravity, caused by cloud services’ financial models, rather than by technology. Forrester points out that new sources and applications, including machine learning/artificial intelligence (AI), edge devices or the internet of things (IoT), risk creating their own data gravity, especially if organisations fail to plan for data growth.
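A back-of-the-envelope sketch of why egress pricing reinforces data gravity. The per-GB rate below is purely hypothetical; real pricing varies by provider, region and tier.

```python
# Back-of-the-envelope: cost of moving a growing dataset off a cloud
# platform. The $0.09/GB egress rate is a hypothetical placeholder.
EGRESS_PER_GB = 0.09

def egress_cost_usd(dataset_tb: float) -> float:
    return dataset_tb * 1024 * EGRESS_PER_GB

for tb in (10, 100, 1000):
    print(f"{tb:>5} TB -> ~${egress_cost_usd(tb):,.0f} just to move it out")
```

The cost scales linearly with the size of the data store, which is exactly why the "artificial" gravity McCrory describes grows as the dataset does.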


CIOs Must Streamline IT to Focus on Agility

“Streamlining IT for agility is critical to business, and there’s not only external pressure to do so, but also internal pressure,” says Stanley Huang, co-founder and CTO at Moxo. “This is because streamlining IT plays a strategic role in the overall business operations from C-level executives to every employee's daily efforts.” He says that streamlining business processes is the best and most efficient way to reflect business status and provide driving power for each department's planning. From an external standpoint, there is pressure to streamline IT because it also impacts the customer experience. “A connected and fully aligned cross-team interface is essential to serve the customer and make a consistent end user experience,” he adds. For business opportunities pertaining to task allocation and tracking, streamlining IT can help align internal departments around one overall business picture and enable employees to perform their jobs at a higher level. “When the IT system owns the source of data for business opportunities and every team’s involvement, cross team alignment can be streamlined and made without back-and-forth communications,” Huang says.


Open Source Software Security Begins to Mature

Despite the importance of identifying vulnerabilities in dependencies, most security-mature companies — those with OSS security policies — rely on industry vulnerability advisories (60%), automated monitoring of packages for bugs (60%), and notifications from package maintainers (49%), according to the survey. Automated monitoring represents the most significant gap between security-mature firms and those firms without a policy, with only 38% of companies that do not have a policy using some sort of automated monitoring, compared with the 60% of mature firms. Companies should add an OSS security policy if they don't have one, as a way to harden their development security, says Snyk's Jarvis. Even a lightweight policy is a good start, he says. "There is a correlation between having a policy and the sentiment of stating that development is somewhat secure," he says. "We think having a policy in place is a reasonable starting point for security maturity, as it indicates the organization is aware of the potential issues and has started that journey."
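A minimal sketch of the kind of automated package monitoring the survey refers to: querying the public OSV.dev vulnerability API for a pinned dependency. The package and version below are only examples.

```python
# Minimal sketch of automated dependency monitoring: ask the public
# OSV.dev API whether a pinned package version has known vulnerabilities.
import requests

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])

# Example: list advisories recorded for an (illustrative) pinned version.
for vuln in known_vulns("jinja2", "2.11.2"):
    print(vuln["id"], vuln.get("summary", ""))
```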



Quote for the day:

"No great manager or leader ever fell from heaven, its learned not inherited." -- Tom Northup