Daily Tech Digest - January 25, 2022

Open Source Democratized Software. Now Let’s Democratize Security

Now, imagine what would happen if the world of cybersecurity were democratized in the way that software development has been democratized by open source. It would create a world where security would cease to be the domain of elite security experts alone. Instead, anyone would be able to help identify the security problems that they face, then build the solutions they need to address them, just as software users today can envision the software platforms and features they’d like to see, then help implement them through open source projects. In other words, users wouldn’t need to wait on middlemen — that is, the experts who hold the reins — to build the security solutions they needed. They could build them themselves. That doesn’t mean that security experts would go away. They’d still be critical, just as professional programmers working for major corporations continue to play an important role alongside independent programmers in open source software development. But instead of requiring small groups of security specialists to identify all the cybersecurity risks and solve them on their own, these tasks would be democratized and shared across organizations as a whole.


A new language for quantum computing

Programming quantum computers requires awareness of something called “entanglement,” a sort of computational multiplier for qubits, which translates to a lot of power. When two qubits are entangled, actions on one qubit can change the value of the other, even when they are physically separated, giving rise to Einstein’s characterization of “spooky action at a distance.” But that potency is also a source of weakness. When programming, discarding one qubit without being mindful of its entanglement with another qubit can destroy the data stored in the other, jeopardizing the correctness of the program. Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aimed to do some unraveling by creating their own programming language for quantum computing called Twist. Twist can describe and verify which pieces of data are entangled in a quantum program, through a language a classical programmer can understand. The language uses a concept called purity, which enforces the absence of entanglement and results in more intuitive programs, with ideally fewer bugs.
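
To see why discarding one half of an entangled pair is dangerous, here is a toy state-vector simulation of a Bell pair in plain Python/NumPy. This is not Twist; all names are invented for illustration.

```python
import numpy as np

# Two-qubit state vector for the Bell pair (|00> + |11>)/sqrt(2),
# basis order |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

def measure_qubit0(state):
    """Probability that qubit 0 reads 0, and qubit 1's state afterwards."""
    p0 = state[0]**2 + state[1]**2              # amplitudes of |00>, |01>
    # Conditioned on qubit 0 collapsing to 0, qubit 1's amplitudes:
    qubit1_given_0 = np.array([state[0], state[1]])
    return p0, qubit1_given_0 / np.linalg.norm(qubit1_given_0)

p0, partner = measure_qubit0(bell)
print(p0)       # 0.5 -- each outcome equally likely beforehand
print(partner)  # [1. 0.] -- qubit 1 is now forced to 0 as well
```

Anything done to qubit 0 (measuring it, or discarding it, which amounts to an implicit measurement) changes what is stored in qubit 1; a language that tracks purity can flag this at compile time instead of letting it corrupt the program.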


Understanding Linus's Law for open source security

Linus's Law asserts that given enough eyeballs, all bugs are shallow, but we don't really know how many eyeballs are "enough." However, don't underestimate the number. Software is very often reviewed by more people than you might imagine. The original developer or developers obviously know the code that they've written. However, open source is often a group effort, so the longer code is open, the more software developers end up seeing it. A developer must review major portions of a project's code because they must learn a codebase to write new features for it. Open source packagers also get involved with many projects in order to make them available to a Linux distribution. Sometimes an application can be packaged with almost no familiarity with the code, but often a packager gets familiar with a project's code, both because they don't want to sign off on software they don't trust and because they may have to make modifications to get it to compile correctly. Bug reporters and triagers also sometimes get familiar with a codebase as they try to solve anomalies ranging from quirks to major crashes. 


UN launches privacy lab pilot to unlock cross-border data sharing benefits

The lab’s first use case will see NSOs share data relating to the import and export of certain commodities recorded between their own country and all other countries in the group. From here, each pair of countries will use PETs to discreetly check whether the amounts of their bilateral trade correspond. The learning exercise will use pre-approved, publicly available data, and will aim to ‘iron out’ any technical, security, or bureaucratic challenges. “Senior leaders are now talking about Privacy Enhancing Technologies to enable cross-border and cross-sector collaboration to solve shared challenges,” said Stefan Schweinfest, director of the UN Statistics Division. “At the same time, PETs will protect shared values such as privacy, accountability, and transparency. This is an important moment for PETs to help improve official statistics, and support democratic societies, honouring citizens’ entitlement to trusted public information.” Dr Jack Fitzsimons, founder of Oblivious, commented: “When you send data to a server, there is well-established technology to make sure it lands at the right place.”
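
As a toy sketch of the equality check described above: two statistics offices can learn whether their recorded bilateral trade figures match without revealing them, here via salted hash commitments. This is only an illustration, not the UN lab's actual PET; real deployments use secure multiparty computation, since guessable values like trade totals would make plain hashing unsafe.

```python
import hashlib
import secrets

def commit(value: str, shared_salt: bytes) -> str:
    """Salted SHA-256 commitment to a value."""
    return hashlib.sha256(shared_salt + value.encode()).hexdigest()

salt = secrets.token_bytes(16)                   # agreed out of band
country_a_export = commit("1204500", salt)       # A's recorded exports to B
country_b_import = commit("1204500", salt)       # B's recorded imports from A

# Each side only ever sees the other's commitment, not the raw figure.
print(country_a_export == country_b_import)      # True -- figures correspond
```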


Surge in Malicious QR Codes Sparks FBI Alert

Menus, event ticket sales, quick site access — QR codes have become a common way to interact as a result of the COVID-19 pandemic. But the smart little matrix bar codes are easily tampered with and can be used to direct victims to malicious sites, the FBI warned in an alert. QR codes are the square, scannable codes familiar from applications like touchless menus at restaurants, and have gained in popularity over the pandemic as contactless interactions have become the norm. Simply navigating a smartphone camera over the image allows the device’s QR translator – built into most mobile phones – to “read” the code and open a corresponding website. “A victim scans what they think to be a legitimate code, but the tampered code directs victims to a malicious site, which prompts them to enter login and financial information,” the FBI alert explained. “Access to this victim information gives the cybercriminal the ability to potentially steal funds through victim accounts.”
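
A scanning app can apply simple heuristics to a decoded URL before opening it. The sketch below is purely illustrative; the rules (and the example domains) are invented, and a real product would use curated threat-intelligence feeds rather than a hard-coded list.

```python
from urllib.parse import urlparse

def looks_suspicious(url: str) -> bool:
    """Hypothetical pre-open checks for a URL decoded from a QR code."""
    parts = urlparse(url)
    if parts.scheme != "https":          # plain HTTP or unusual schemes
        return True
    host = parts.hostname or ""
    if host.startswith("xn--"):          # punycode lookalike domains
        return True
    return False

print(looks_suspicious("http://pay-menu.example/login"))   # True
print(looks_suspicious("https://menu.example/table/4"))    # False
```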


Palo Alto Networks and IBM Are Automating 5G Security for Business Growth

The evolution of mobile networks has long focused on moving faster, and moving toward more distributed and abstracted network architectures. The payoff has been remarkable: networks that are massively scalable, extremely adaptable, and highly resilient. But modern networks are also incredibly dynamic and complex. It's impossible to manage them or to deploy a modern service-based architecture without also implementing the automation tools required to manage and maintain them. The same imperative applies to 5G security automation. As part of our network slice creation process, for example, IBM Cloud Pak for Network Automation serves as a master orchestrator, delivering the Network Slice Management Function (NSMF) to create a network slice across a cloud-native 5G network, running on Red Hat OpenShift. Security parameters are passed on to the orchestrator, and then instantiated and configured for the Palo Alto Networks CN-Series firewall. NSMF also deploys Prisma Cloud Compute Edition to protect the Kubernetes container environment supporting the core network functions. 


Red vs. blue vs. purple teams: How to run an effective exercise

“Red teams don’t just test for vulnerabilities, but do so using the tools, tips and techniques of their likely threat actors, and in campaigns that run continuously for an extended period of time,” wrote Daniel Miessler, a security consultant who has witnessed numerous red/blue exercises, in a blog post. “A great red team can be an early warning system to find common origins of attacks and to track an adversary’s techniques.” John, a retired IBM architect who has worked in large IT shops, tells CSO that “threats are going to emerge that red teams will never test for. There are threats that can overwhelm blue teams and possibly put companies out of business.” ... Let’s also talk about the color purple. This carries several different meanings, depending on how this team is constructed. The color gives you the idea that this is a combination of both red and blue teams, so that both can collaborate and improve their skills. This combination could mean that there are representatives from both sides working together on the exercise, or even as part of their jobs.


Top 3 cloud-based drivers of digital transformation in 2022

How people feel doesn’t change between being a consumer and being at work. They want options around hybrid work, equity and wellness. Much of the language today around the “Great Resignation,” or people leaving their jobs, is less about money and time at the office, and more about finding work with meaning that ultimately contributes to a better world. Beyond wanting to be heard at the workplace, people are curious to know how their work makes a positive contribution. For example, companies want more visibility into the carbon footprint of their technology platforms and options to reduce it, offering positive contributions that are appreciated by both staff and customers. This is in part in response to people bringing their values to work and companies responding to those values. We’ll see this increase moving forward. According to Deloitte, Gen Z is the first generation to make choices about the work they do based on personal ethics. And McKinsey says two-thirds of millennials take an employer’s social and environmental commitments into account when deciding where to work.


The Web3 Stack: What Web 2.0 Developers Need to Know

One of the trickiest parts of Web3 development is storing and using data. While blockchains are good at being “trustless” chains of immutable data, they are also incredibly inefficient at storing and processing large amounts of data — especially for dapps. This is where file storage protocols like IPFS, Arweave and Filecoin come in. Arweave is an open source project that describes itself as “a protocol that allows you to store data permanently, sustainably, with a single upfront fee.” It’s essentially a peer-to-peer (P2P) network, but has its own set of crypto buzzwords — its mining mechanism is called “Succinct Proofs of Random Access (SPoRAs)” and developers can deploy apps to the “permaweb” (“a permanent and decentralized web built on top of the Arweave”). To complicate matters further, dapp developers have the option to use “off-chain” solutions, where the data is stored somewhere other than the main blockchain. Two common forms of this are “sidechains” (secondary blockchains) and so-called “Layer 2” (L2) solutions, like Bitcoin Lightning Network and Ethereum Plasma.
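
The idea common to IPFS, Arweave and Filecoin is content addressing: data is keyed by a hash of its bytes, so identical content always resolves to the same identifier. The following minimal sketch shows only that idea; real CIDs add multihash and multibase encoding on top.

```python
import hashlib

store: dict[str, bytes] = {}   # stand-in for the P2P network's storage

def put(data: bytes) -> str:
    """Store data under the hash of its own bytes and return that key."""
    key = hashlib.sha256(data).hexdigest()
    store[key] = data
    return key

cid = put(b"hello, permaweb")
print(cid == put(b"hello, permaweb"))   # True -- same content, same key
print(store[cid])                       # b'hello, permaweb'
```

Because the key is derived from the content, a retrieved blob can always be verified against its identifier, which is what makes untrusted peers workable.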


Hybrid & Remote Work in 2022 and Beyond

Mental health and wellbeing are in the spotlight - with hundreds of thousands of people shifting to a new working environment in the midst of the chaos caused by the COVID-19 pandemic, anxiety, fear, sadness, anger and frustration are normal reactions. InfoQ reported on some resources and advice to help maintain mental health when under stress. We need to be kind to ourselves, and accept that these emotions will happen, without minimising or denying them. There are things that you can do to help overcome the stress; empathic responding is one way to positively deal with the stresses we all find ourselves under. Research shows that mental health is still not well addressed in most workplaces, mainly because it is still stigmatized in society despite impacting at least one in five people at any given time, and highlights the importance of training managers to support the mental health of their teams. The World Health Organisation has stated that maintaining physical and mental health is key to resilience during the COVID-19 pandemic, and provides advice on how to look after yourself and support those around you.



Quote for the day:

"Leaders are people who believe so passionately that they can seduce other people into sharing their dream." -- Warren G. Bennis

Daily Tech Digest - January 24, 2022

The Role of EiPaaS in Enterprise Architecture: Part 1

The fourth stage of the enterprise architecture emerged as a result of internal organizational changes and the external market outlook — mainly decentralized architecture styles (microservices and cloud native) and agile processes. Each function or LoB is looking for autonomy by recruiting their own technical teams and owning the entire lifecycle (plan, build, test, run, manage) of the systems and subsystems they make or buy. The enterprise architecture utilizes platforms running on internal and external cloud infrastructures to facilitate this. Multitenancy and segmentation are some of the techniques used to provide platform capabilities to each LoB. As a result, the integration logic and the implementation responsibility also move to each LoB. However, the platform approach of this fourth stage incorporates centralized governance, security, monitoring and standardization of technology and patterns. It is important to use platforms in such an environment; otherwise, LoBs will start building shadow IT applications private to the function and the IT team will lose control of those applications. 


Getting ahead of the curve on mitigating mobile fraud

Google and the other app store providers will continuously review their security procedures to make their platforms and devices more secure. But big tech companies like Google have to deal with so many new apps and updates constantly that it’s inevitable that many malicious apps may find their way onto the store. For a long time, too, there has been a case to educate customers about the threats they face. Banks make noticeable efforts to warn customers about potential threats like clicking suspicious links via SMS or email or not downloading anything to their device from an untrusted source. But the truth is, inevitably, someone will make a mistake as fraudsters use various techniques to gain a user’s trust. With apps seeming completely harmless, it’s all too easy for precisely this to happen. By the time banks warn their customers about specific threats, the likelihood is that fraudsters are already evolving beyond those techniques, finding new ways to fool their unsuspecting victims. 


IT talent and the Great Resignation: 8 ways to nurture retention

Technology employees have never had more opportunities than they do right now to advance their skills online, network at virtual events, and work remotely without relocating to tech hubs. They can dip their toes in multiple pools and switch streams relatively easily. And after months of toiling to keep their organizations going amid turbulent times, the urge to seek out calmer (or more rewarding) seas is strong. “IT professionals are highly valued members of company teams, and opportunity for these skilled individuals to develop or move on seems endless these days,” says Michele Bailey, author of The Currency Of Gratitude: Turning Small Gestures Into Powerful Business Results and CEO of The Blazing Group. “On top of that, the many changes and challenges brought by the pandemic have increased stress levels among us all. There is certainly plenty of reason for stressed-out IT leaders to look outside their existing roles for new opportunities and a better work-life balance.” For CIOs who want to retain their top talent, it can be a tough sell.


Kafka Or Pulsar? A Battle Of The Giants Concerning Streaming

The two-fold vision is, first, to build resiliency into software, such that loosely coupled services can be started, stopped, paused, or restarted as needed. By “services,” we mean the discrete programs that correspond to a cloud-native app’s constitutive functions. This makes it possible to scale cloud-native apps by adding or subtracting instances of services. Second, and concomitant with this, cloud-native design aims to make business services observable – i.e., susceptible to fine-grained control and manipulation – by humans and machines alike. You are not scaling servers, storage, and network capacity; you are, in effect, adjusting sliders that permit you to manipulate the behavior of the service. Human beings can do this, manually … but so can machines – automatically, in accordance with predefined rules. As I write in a separate piece (for a different venue) that has not yet been published: Observability instrumentation makes it easier for operations personnel to provision extra resources in response to an observed service impairment.
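
The “adjusting sliders” idea can be sketched as a predefined rule that maps an observed metric to a desired instance count. The function name, thresholds, and proportional rule below are invented for illustration; production systems use mechanisms like the Kubernetes Horizontal Pod Autoscaler.

```python
import math

def desired_replicas(current: int, observed_p95_ms: float,
                     target_p95_ms: float = 200.0,
                     min_n: int = 1, max_n: int = 20) -> int:
    """Proportional rule: scale replicas with the ratio of observed to target latency."""
    n = math.ceil(current * observed_p95_ms / target_p95_ms)
    return max(min_n, min(max_n, n))

print(desired_replicas(current=4, observed_p95_ms=450.0))  # 9 -- scale out
print(desired_replicas(current=4, observed_p95_ms=120.0))  # 3 -- scale in
```

The point is that the rule consumes observability data and acts on the service abstraction, not on individual servers.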


Intelligent Process Automation Can Give Your Company a Powerful Competitive Advantage

McKinsey defines IPA as “a collection of business-process improvements and modern technologies that combines fundamental process redesign with robotic process automation (RPA), artificial intelligence (AI), machine learning (ML), and cognitive technologies like optical character recognition (OCR) and natural language processing (NLP).” It helps organizations redesign processes and workflows in alignment with customer journeys for seamless experiences, digitize data for personalization and insights, and automate mundane tasks to achieve groundbreaking increases in productivity. In the world of operations, IPA is a Swiss Army knife. CEOs love its power to transform customer and employee experiences; CFOs appreciate its potential to grow efficiency exponentially; line-of-business leaders like the clear results it delivers; chief information officers embrace it as a digital accelerator and a way to demonstrate business outcomes. One U.S. health insurer, after adopting IPA across its enterprise, found it could process claims six times faster. 


Why Change Intelligence Is Necessary to Effectively Troubleshoot Modern Applications

To be able to truly gain the insights you require from your systems when problems arise, you need to add another piece to the puzzle - and that is Change Intelligence. Change Intelligence includes not only understanding when something has changed, but also why it has changed, who changed it, and what impact the change has had on your systems. The existing onslaught of data is often overwhelming for operations engineers. Therefore, Change Intelligence was introduced to provide broader context around the telemetry and information that you already have. For example, if you have three services talking to each other, and one of these services has an elevated error rate, this is a good indicator that something is wrong according to your telemetry. This is an excellent basis for suspecting something is wrong in your system; however, the next and more critical step will always be to start digging to find the root cause behind this anomalous telemetry data.
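
A toy version of that digging step: given the time of an error-rate spike in one service, find the most recent recorded change to that service within a lookback window. The data shapes, field names, and window are all invented for illustration.

```python
from datetime import datetime, timedelta

changes = [
    {"service": "billing", "who": "alice", "what": "config update",
     "at": datetime(2022, 1, 24, 9, 15)},
    {"service": "billing", "who": "bob", "what": "deploy v2.3.1",
     "at": datetime(2022, 1, 24, 11, 2)},
    {"service": "search", "who": "carol", "what": "index rebuild",
     "at": datetime(2022, 1, 24, 11, 30)},
]

def likely_cause(service: str, spike_at: datetime, window_h: int = 6):
    """Most recent change to this service in the window before the spike."""
    candidates = [
        c for c in changes
        if c["service"] == service
        and timedelta(0) <= spike_at - c["at"] <= timedelta(hours=window_h)
    ]
    return max(candidates, key=lambda c: c["at"], default=None)

cause = likely_cause("billing", datetime(2022, 1, 24, 11, 20))
print(cause["who"], cause["what"])   # bob deploy v2.3.1
```

Correlating the what/who/when of changes with anomalous telemetry is the essence of the Change Intelligence claim above.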


Twitter: Head of Security Reportedly Fired; CISO to Leave

In a memo shared with employees and accessed by The New York Times, the social media platform reportedly said, "The changes followed an assessment of how the organization was being led and the impact on top-priority work." Twitter's head of privacy engineering, Lea Kissner, will become the company's interim CISO, according to the report. Reportedly, after assuming the CEO position, Agrawal reorganized the management staff and dismissed Dantley Davis, the chief design officer, and Michael Montano, the head of engineering. In a previous filing with the Securities and Exchange Commission, Twitter said that Agrawal is restructuring the leadership team to drive increased accountability, speed and operational efficiency, and shifting to a general manager model for consumer, revenue and core technologies, which will be led by Kayvon Beykpour, Bruce Falck and Nick Caldwell, respectively. "These GMs will lead all core teams across engineering, product management, design, and research."


Fraud detection is great, but you also need prevention

A lot of the complexity of fraud detection comes from the fact that most fraud solutions focus solely on bad actors. They specialize in identifying the criminals by looking for suspicious factors. A new approach which is becoming more common is adding a stage before the fraud detection phase: positive validation. The overwhelming majority of customers are real people, with real, trustworthy histories and identities. If most of them can be identified confidently at the start, then the fraud detection problem becomes more manageable. All the fraud team’s resources can be spent on the cases where there’s real cause for doubt, and the team can apply judicious friction where appropriate (such as email validation, or multi-factor authentication) in those cases. Positive validation has become a practical possibility partially due to online companies’ increased desire to collaborate with one another. Using providerless technology, generally based on some form of Privacy Enhancing Technology, companies can validate and vouch for trustworthy customers without sharing any personal user information with one another.
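
The validate-first pipeline can be sketched as a simple router: known-trustworthy customers pass through, and only the remainder receives friction or review. The trusted set, field names, and threshold below are invented for illustration.

```python
# Customers vouched for up front (e.g. via a PET-based trust network).
trusted_ids = {"cust-001", "cust-007"}

def route(txn: dict) -> str:
    """Decide how to handle a transaction: positive validation first."""
    if txn["customer_id"] in trusted_ids:
        return "approve"          # positively validated, no friction
    if txn["amount"] > 1000:
        return "step-up-auth"     # judicious friction, e.g. MFA
    return "fraud-review"         # detection resources go here

print(route({"customer_id": "cust-001", "amount": 5000}))  # approve
print(route({"customer_id": "cust-999", "amount": 5000}))  # step-up-auth
print(route({"customer_id": "cust-999", "amount": 50}))    # fraud-review
```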


How quantum computing is helping businesses to meet objectives

According to Oberreuter, once a quantum computer becomes involved in the problem-solving process, the optimal solution can really be found, allowing businesses to find the best arrangements for the problem. While current quantum computers suitable for this kind of problem, called quantum annealers, now have over 5,000 qubits, many companies that enlist Reply’s services often find that the problems they have require more than 16,000-20,000 variables, which calls for more progress to be made in the space. “You can solve this by making approximations,” commented the Reply data scientist. “We’ve been writing a program that is determining an approximate solution of this objective function, and we have tested it beyond the usual number of qubits needed. “The system is set up in a way that prevents running time from increasing exponentially, which results in a business-friendly running time of a couple of seconds. This reduces the quality of the solution, but we get a 10-15% better result than what business heuristics are typically providing.”
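
The objective function an annealer minimises is typically a QUBO (Quadratic Unconstrained Binary Optimization). Here is a three-variable toy instance solved by brute force; the Q matrix is invented for illustration, and real annealers exist precisely because brute force stops working at thousands of variables.

```python
import itertools

# QUBO coefficients: diagonal entries are linear terms,
# off-diagonal entries are couplings that penalise pairs being 1 together.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
     (0, 1): 2.0, (1, 2): 2.0}

def energy(x):
    """Objective value of a binary assignment x."""
    return sum(coef * x[i] * x[j] for (i, j), coef in Q.items())

# Brute force over all 2^3 assignments -- feasible only at toy scale.
best = min(itertools.product((0, 1), repeat=3), key=energy)
print(best, energy(best))   # (1, 0, 1) -2.0
```

An annealer searches for the same minimum-energy assignment physically; approximation schemes like the one described above trade solution quality for bounded running time.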


EU Plans to Build Its Own DNS Infrastructure

A commission spokesperson tells Information Security Media Group, "This initiative addresses the lack of significant EU investment in free and public DNS resolution and enables the deployment of an alternative to existing solutions in a market that is characterized by a consolidation of this service in the hands of a few non-EU providers." The commission says this new DNS infrastructure proposition is crucial because "the processing of DNS data can have an impact on privacy and data protection rights" of internet users in EU. The deployment and usage of this new infrastructure means that data protection and privacy will be strictly governed by rules applicable in the EU - such as GDPR, among others - and this will "ensure that DNS resolution data are processed in Europe and personal data are not monetized." Currently, many DNS resolvers do not recognize EU privacy legislation, such as GDPR and ePrivacy, and could potentially allow operators to track user activity clandestinely and block or manipulate requests such as inserting advertisements and custom search results.



Quote for the day:

"No organization should be allowed near disaster unless they are willing to cooperate with some level of established leadership." -- Irwin Redlener

Daily Tech Digest - January 23, 2022

Blockchain As A Service

Businesses need to find ways to get ahead; by taking on new technology, they can gain an edge. Blockchain can be a very complicated technology, but with the right Blockchain as a service solution, it should not get in the way of your business goals and objectives. If you are looking to get started quickly without having to know too much about Blockchain or how Blockchain works, then make sure that your Blockchain platform is easy for you to use. Blockchain as a service should be easy to understand and use so that you can get up and running right away. Blockchain is easy enough for everyone, including business owners who may not have much experience working with blockchain technology. However, if you want your Blockchain platform to provide the best possible functionality, ease of use is important. The Blockchain your business will run on may not be compatible with the Blockchain platform that you want to work with. Make sure that the Blockchain platform you choose is compatible with the technologies that your business uses.


Why Data Operations Platforms Can't Survive without Governance

Data operations (DataOps) is a data management methodology that improves an organization's efficiency and profitability by applying agile techniques to the workflows that derive business value from data. It leverages a combination of processes and technologies to reduce data friction, improve business agility, and increase security, integrity, and reliability throughout the data pipeline. However, to achieve increased productivity, a DataOps system has to ensure that the data is governed throughout its life cycle as it moves through increasingly complex pipelines and analytics workflows. Data governance is another data management concept that focuses on the necessary people, processes, and technologies to ensure the availability, usability, integrity, and security of enterprise data, based on internal data standards and policies. Effective data governance ensures that data is consistent, trustworthy, and appropriately used by data consumers. Data governance focuses on the organizational strategies, roles, and policies that define who can take what action on what data, in which situations, using what methods. 
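
The “who can take what action on what data, in which situations” idea reduces to a policy lookup consulted before any pipeline step runs. The roles, actions, and dataset names below are invented for illustration; real platforms express this with richer policy engines.

```python
# Policy table: (role, action, dataset) -> permitted?
POLICY = {
    ("analyst", "read", "sales"): True,
    ("analyst", "write", "sales"): False,
    ("steward", "write", "sales"): True,
}

def allowed(role: str, action: str, dataset: str) -> bool:
    """Check the governance policy; anything not explicitly granted is denied."""
    return POLICY.get((role, action, dataset), False)

print(allowed("analyst", "read", "sales"))    # True
print(allowed("analyst", "write", "sales"))   # False
print(allowed("intern", "read", "sales"))     # False -- no rule, default deny
```

Default-deny is the design choice that keeps ungoverned data access from slipping through as pipelines grow.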


5 considerations for saving more and wasting less on cloud services

Data on the historical usage of resources for a product or project is key to getting an understanding of the basic needs for your engineering and developer teams to plan and select commitments. However, it is often overlooked that past resource utilization will not always reflect future usage. Changes in business strategy, right-sizing or migration plans, and other external factors can lead to a drastic deviation from historical usage patterns. The net result is either over committing to resources and wasting money, or under-committing and missing savings opportunities. To anticipate potential deviations from historical usage, consider modeling the impact of different scenarios on usage and costs. Scenario planning is a nuanced activity that takes place between engineering, finance, and operations teams. Inputs such as right-sizing, migration, re-architecting, new projects, business growth, and financial best practices need to supplement historical data in estimating future costs and cloud service savings and usage to avoid over or under-commitment to services.
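
Scenario modelling of the over- vs. under-commitment trade-off can be as simple as pricing each forecast under a candidate commitment. All prices, usage numbers, and scenario names below are invented for illustration.

```python
ON_DEMAND = 0.10   # $ per instance-hour, pay as you go
COMMITTED = 0.06   # $ per committed instance-hour, paid whether used or not

def monthly_cost(usage_hours: float, committed_hours: float) -> float:
    """Committed capacity is paid in full; overflow is billed on demand."""
    overflow = max(0.0, usage_hours - committed_hours)
    return committed_hours * COMMITTED + overflow * ON_DEMAND

scenarios = {"historical": 10_000, "migration slump": 5_000, "growth": 16_000}
for name, hours in scenarios.items():
    print(f"{name}: commit 10k -> ${monthly_cost(hours, 10_000):,.0f}, "
          f"no commit -> ${hours * ON_DEMAND:,.0f}")
```

Running the three scenarios makes the risk concrete: in the slump scenario the 10,000-hour commitment costs more than pure on-demand, while in the growth scenario it saves substantially.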


Vast Data Eyes A Role In The Data Center Beyond Storage

Vast Data views itself as a data company, he says. Looking at the spectrum of storage systems available, some are more intelligent and evolved than others. “File systems are more evolved than block storage systems or JBODs are, and what you have are essentially data management systems,” Denworth says. “We realized that we could take that far beyond the classic definitions of a file system, but the realization was that the architecture that has the most intimate understanding of data can make the best decisions about what to do with that data. First by determining what’s inside of it. Second of all, by either moving the data to where the compute is or the compute to where the data is, depending upon what the most optimized decision is at any given time.” There is a “classic IT stack that infrastructure teams have been running for forever, products like VMware and Oracle and EMC,” he says.


The opportunities and challenges of data center industry in 2022

The data center industry is gearing up for broader adoption of liquid cooling tech in 2022. The demand from powerful new hardware for AI workloads, pressure to eliminate water use in cooling servers, and the progress of liquid cooling tech are prompting an overhaul of current systems. Microsoft has already started using immersion-cooled servers in production on its campus in Quincy, Washington. The company had been testing cooling technology used in bitcoin mining facilities. Here, servers are dunked in tanks of cooling fluid to manage rising heat. “We are the first cloud provider that is running two-phase immersion cooling in a production environment,” said Husam Alissa, principal hardware engineer on Microsoft’s data center team. In November, Digital Realty also started offering liquid-cooled servers in its Digital Loyang 1 (SIN11) facility in Singapore, slashing power consumption by up to 29 per cent compared with conventional air-cooled servers. Advancements like the metaverse call for dense compute, varying latencies, and massive storage requirements.


McAfee Bug Can Be Exploited to Gain Windows SYSTEM Privileges

According to the advisory, McAfee Agent “contains a privileged service that uses this OpenSSL component. A user who can place a specially crafted openssl.cnf file at an appropriate path may be able to achieve arbitrary code execution with SYSTEM privileges.” Dormann found that an unprivileged user could exploit the bug to place a specially crafted openssl.cnf in a location used by McAfee Agent and thus potentially be able to execute arbitrary code with SYSTEM privileges on a Windows system that has the vulnerable McAfee Agent software installed. When Dormann referred to an openssl.cnf, he was talking about an OpenSSL configuration file: a file that provides SSL defaults for items such as certificate files locations, and site details such as those entered during installation. The second bug in the Agent – tracked as CVE-2021-31854 and given a CVSS criticality rating of 7.7 – can be exploited by a local user to inject arbitrary shell code into a file, McAfee said in its advisory.


Cisco chases quantum tech

Of particular interest to Cisco is future development of quantum-based networks, data centers, and internet technologies. “Quantum networking could enable a new type of secure connection between digital devices, making them impenetrable to hacks,” Centoni stated. “As this type of foolproof security becomes achievable with quantum networking, it could lead to better fraud protection for transactions. In addition, this higher quality of secure connectivity may also be able to protect voice and data communications from any interference or snooping. All of these possibilities would re-shape the internet we know and use today.” Looking closer, Cisco’s vision is twofold: to build quantum data centers that could use classic local area network concepts to tie together quantum computers to communicate to solve big problems, or a quantum-based network that transmits quantum bits (qubits) from quantum servers at high speeds to handle commercial-grade applications, said Ramana Kompella, a Distinguished Engineer and the head of research in the Emerging Tech and Incubation group at Cisco.


Should we stop trying to make Open Banking happen when it’s not going to happen?

While Boden’s views dominated headlines at the end of last year, what hasn’t been covered in as much depth is what was said by others present, particularly those in the CMA9 – the group of nine banks that were the first to be involved in the UK’s Open Banking experiment. John Collins, chief legal and regulatory officer at Santander UK, commented on Open Banking and stated that, “Open banking has been slow, but it is a good example of banks and fintechs getting together and working out where the opportunity is for the consumer in the future. We need to focus on doubling down, with the infrastructure now in place, and proving the use case, not pursuing all the connoisseurs’ points and the smaller scenarios right now.” Collins here is saying that we can still utilise the Open Banking initiatives, perhaps not for what was originally planned. Rather than focusing on making a concept that has been mandated happen, banks should be looking at how to use that technology effectively and ensure progress.


Bluprint Teaches Kids to Code JavaScript in the Metaverse

The Bluprint metaverse comes pre-populated with objects and characters, which have code that can quickly be viewed and altered. It gives users a different way to interact with and learn code, one that can be layered from one level of difficulty to the next. “We can actually scaffold the learning by saying: step one, look at the code and change something little about it (a string, a number) and then see what effect that has on its behavior. Or maybe add a new behavior by adding a couple of methods, or just activating them in some way, and see if that can work,” said Chopra. “It’s about changing something you see, it’s about adding something, and then, eventually, it’s about authoring things from scratch.” The Bluprint metaverse is built with security and privacy in mind for its users (seeing as they’re often below the age of 12), and the worlds that they build are invite-only, with parental controls built-in. In terms of coding safety, Bitsbox and Bluprint co-founder Scott Lininger explained that “Kids can write any JavaScript they like.


All in the Data: Ways to Improve Your Data

Data by itself has no meaning or context. If you are provided a piece of data – let’s say “01229” is that data – you have no way of knowing what that data represents without some description or information about it. Is it a quantity, an amount, an address, a calculated field, or something completely different? The data, your data, has no meaning unless context is provided. That context, in the field of data management, is metadata. Now put yourself in the position of a corporate executive viewing their daily production dashboard, or a manager who needs to make a decision based on data they receive in a report. These people must trust and have confidence in the data they use. That confidence comes from their knowledge about the data, or in other words, the metadata that helps the organization to improve its data. Organizations that strive to become data-centric or data-driven are introducing policies that spell out that data is an asset and how it will be governed. Organizations are concentrating on changing their cultures when it comes to data, and there is a push for organizations to become more data literate.
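To make the idea concrete, here is a minimal sketch (the field names, description, and source system are all hypothetical) of how metadata turns the bare value “01229” into something meaningful:

```python
# Without metadata, "01229" could be a quantity, an amount, an address,
# or something else entirely. Metadata supplies the context.
raw_value = "01229"

metadata = {
    "field_name": "postal_code",
    "data_type": "string",          # the leading zero rules out a plain integer
    "description": "US ZIP code of the customer's billing address",
    "source_system": "crm",         # hypothetical source system
    "steward": "data-governance team",
}

def describe(value, meta):
    """Return a human-readable interpretation of a raw value."""
    return f"{value} is a {meta['data_type']}: {meta['description']}"

print(describe(raw_value, metadata))
```

In a real organization this context would live in a metadata catalog rather than an inline dictionary, but the principle is the same: the value only becomes trustworthy once its metadata is known.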



Quote for the day:

"Each day you are leading by example. Whether you realize it or not or whether it's positive or negative, you are influencing those around you." -- Rob Liano

Daily Tech Digest - January 22, 2022

The Rise of the Technical CEO

Today’s environment is a unique one for leaders. Businesses cannot afford for leadership to be focused on just one part of the business—the world is too interconnected and moves too quickly. That is why we’ve moved into the era of technologists as CEOs. Every company is a technology company in today’s digital-first world. Industries are constantly being disrupted by the next big thing, which means businesses need modern CEOs that are equally comfortable managing the business as they are with technology. As so many organizations look to navigate digital transformation journeys, having a leader at the helm who understands the importance not just of having technology, but having the right technology, is critical. Technology is a strategic advantage for today’s organizations. Without a leader who can make those nuanced decisions, it’s impossible to create solutions that will be useful for customers. And customers must always be at the center of any CEO’s decisions. Rocket’s solutions touch the lives of so many every day – from withdrawing money from an ATM to swiping your credit card at a convenience store, our technology is critical to ensuring the lives of millions run smoothly.


IT spending trends point to CIO innovation

Forward thinking, meanwhile, will spark an increase in long-term contracts that accommodate three-to-five-year planning horizons. Inflation and the war for talent also encourage extended contract periods, Lovelock noted. Longer-term deals offer CIOs greater certainty regarding cost and the availability of technical skills, he said. The skill shortage will also generate demand for external service providers such as consultants and MSPs. The Gartner forecast shows IT services growing 7.9% year over year in 2022, hitting $1.3 trillion. The market watcher expects IT services' spending growth to trail only enterprise software, which tops the Gartner forecast with a projected 11% year-over-year increase. Business and technology consulting services will emerge as one of the fastest-growing sectors in IT services, growing at a 10% clip in 2022, Lovelock said. Cloud adoption will help drive that spurt. Gartner research suggests the vast majority of large organizations will hire external consultants to devise cloud strategies over the next few years.


ICO criticises government-backed campaign to delay end-to-end encryption

The privacy watchdog said end-to-end encryption plays an important role in safeguarding privacy and online safety, protecting children from abusers, and is crucial for business services. The intervention follows the launch of a government-funded campaign this week that warns that social media companies are “blinding themselves” to child sexual abuse by introducing end-to-end encrypted messaging services. Stephen Bonner, the ICO’s executive director of innovation, said the discussion on end-to-end encryption had become too unbalanced, with too much focus on the costs, without weighing up the significant benefits it offers. “E2EE serves an important role both in safeguarding our privacy and online safety,” he said. “It strengthens children’s online safety by not allowing criminals and abusers to send them harmful content. “It is also crucial for businesses, enabling them to share information securely and fosters consumer confidence in digital services.”


Looking Beyond Biden's Binding Security Directive

What is truly alarming, however, is how far behind many public and private organizations are with their patch management procedures. We frequently find known vulnerabilities in our customers' business-critical applications that are several years old and still unpatched. This directive looks to change that, ensuring agencies and their third-party vendors develop plans to find and remediate these known vulnerabilities. Multiple studies demonstrate that detecting vulnerabilities and prioritizing the right patches quickly and efficiently are the largest challenges. By establishing a prioritized catalog of vulnerabilities, the directive seeks to give federal agencies a leg up. The onus of establishing a plan and process for remediation, however, still remains with the individual federal agencies. Nevertheless, we're glad to see the Biden administration take this critical step forward in improving the cybersecurity posture of the United States and, by extension, the companies that provide services to the federal government.


After ransomware arrests, some dark web criminals are getting worried

There's a consensus among cybersecurity experts that many of the major ransomware operations work out of Russia, with the authorities willing to turn a blind eye towards attacks targeting the West. But following arrests throughout the region, some cyber criminals are wondering if the risk is worth it. "This is a big change. I have no desire to go to jail," wrote one forum member. "In fact, one thing is clear, those who expect that the state would protect them will be greatly disappointed," said another. There's even concern that administrators of the dark web communities – who would have details about their users – could be coerced into working for law enforcement following arrest. Such is the paranoia among some forum members and ransomware affiliates that they suggest moving operations to a different jurisdiction, although this is unlikely to be a realistic option for many. "Those that are seasoned in cybercrime understand that by moving outside of Russia, they'll be taking on an even greater risk of being arrested by international law enforcement agencies. These agencies that are keeping tabs on cyber criminals will be watching for such potential moves," said Ziv Mador.


The internet runs on free open-source software. Who pays to fix it?

“Tech companies, enterprises, anyone writing software is dependent on open-source,” says Wysopal. “Now there is a recognition at the highest levels of government that this is a big risk.” Easterly and other experts say that tech companies need to improve transparency. Adopting a Software Bill of Materials, as mandated by a 2021 executive order on cybersecurity from President Joe Biden, would help both developers and users better understand what is actually vulnerable to hacking when software flaws are discovered. Valsorda, who has managed to turn his own open-source work into a high-profile career, says that formalizing and professionalizing the relationship between developers and the big companies using their work could help. He advocates turning open-source work from a hobbyist pursuit into a professional career path so that critical infrastructure isn’t dependent on the spare time of a developer who already has a full-time job. And he argues that companies should develop systems to pay the people who maintain open-source projects their fair market value.


The Prometheus traffic direction system is a major player in malware distribution

The goal of such traffic direction systems is to redirect legitimate web users to malware, phishing pages, tech support scams, or other malicious operations. This is achieved by placing malicious scripts on compromised websites that intercept traffic or through malicious advertisements that are served to users on legitimate websites through ad networks. The main benefit of a TDS is that it allows cybercriminals to define redirection rules from an administration panel based on the type of visitors hitting the system's web of malicious landing pages. On compromised websites, Prometheus achieves this through a simple PHP backdoor script that fingerprints visitors -- browser, OS, timezone, language settings -- and sends the information back to a command-and-control server from where it pulls redirect instructions defined by attackers. This means that different categories of visitors can be redirected to different campaigns depending on the target audience the different groups renting TDS services want to reach and victims can also end up seeing localized scams in their language. 
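As an illustration of the mechanism described above (not Prometheus's actual code), a redirect rule engine of this kind can be sketched in a few lines; the rules, campaign URLs, and fingerprint fields are all invented for the example:

```python
# A TDS matches a visitor fingerprint -- browser, OS, timezone, language --
# against attacker-defined rules and returns a per-audience redirect.
RULES = [
    # hypothetical campaigns keyed on fingerprint attributes
    {"match": {"os": "Windows", "language": "de"}, "redirect": "https://example.test/de-scam"},
    {"match": {"os": "Android"}, "redirect": "https://example.test/fake-app"},
]
DEFAULT = "https://example.test/benign"  # decoy for visitors matching no rule

def pick_redirect(fingerprint, rules=RULES):
    """Return the first redirect whose rule matches every fingerprint field."""
    for rule in rules:
        if all(fingerprint.get(k) == v for k, v in rule["match"].items()):
            return rule["redirect"]
    return DEFAULT

visitor = {"os": "Windows", "browser": "Chrome", "timezone": "UTC+1", "language": "de"}
print(pick_redirect(visitor))  # this profile gets the German-localized scam
```

This is why the same compromised site can serve different victims localized scams in their own language: the rules are keyed on exactly the attributes the fingerprinting backdoor collects.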


Google finds a nation-state level of attacks on iPhone

The company behind the software used in these attacks, NSO, reportedly uses a fake GIF trick to target a vulnerability in the CoreGraphics PDF parser. The files have a .gif extension, but they are not GIF image files. The name is solely designed to keep a user from getting worried. “The ImageIO library is used to guess the correct format of the source file and parse it, completely ignoring the file extension. Using this fake gif trick, more than 20 image codecs are suddenly part of the iMessage zero-click attack surface, including some very obscure and complex formats, remotely exposing probably hundreds of thousands of lines of code.” As Google noted, these attacks are difficult to thwart. Blocking all GIF images is unlikely to prove effective. First, these files aren’t actually GIFs. The simplest approach is to block anything using a GIF extension, but the bad guys will simply switch to a different innocuous-sounding extension. ... Another Google point: “JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory.
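The underlying lesson, that file type should be determined from content rather than from the extension, can be sketched as follows (the magic-byte values are the standard GIF signatures; the sample payloads are illustrative):

```python
# A real GIF file starts with the signature "GIF87a" or "GIF89a".
# Checking these bytes, rather than the filename, is how format sniffers
# like ImageIO behave -- and why extension-based blocking fails.
GIF_MAGIC = (b"GIF87a", b"GIF89a")

def is_real_gif(data: bytes) -> bool:
    """Check the magic bytes instead of trusting the filename."""
    return data[:6] in GIF_MAGIC

fake_gif = b"%PDF-1.7 ..."          # a PDF payload wearing a .gif name
real_gif = b"GIF89a" + b"\x00" * 10

print(is_real_gif(fake_gif))   # False: the ".gif" extension proves nothing
print(is_real_gif(real_gif))   # True
```

A defender filtering on the `.gif` extension would block the harmless real GIF and wave through the PDF exploit, which is exactly the asymmetry Google describes.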


FAQ: What's happening with 5G and airport safety?

The Federal Communications Commission (FCC) concluded in 2020 that studies warning of this danger did "not demonstrate that harmful interference would likely result under reasonable scenarios" or even "reasonably 'foreseeable' scenarios." Tom Wheeler, a visiting Brookings Institution fellow and former FCC head, said in a paper that he doesn't think there's a real technical problem. The long-term answer to this problem is to "improve the resilience of future radar altimeter designs to RF interference." In the meantime, Wheeler pointed out, "The FCC created a guard band between the 5G spectrum and the avionics spectrum in which 5G was forbidden. Boeing, in a filing with the FCC, had proposed just such a solution. The Boeing proposal was to prohibit 5G 'within the 4.1-4.2 GHz portion of the band.' The FCC agreed and then doubled the size of Boeing's proposed guard band to a 220 MHz interference buffer between the upper 5G usage at 3.98 GHz, and avionics usage at 4.2 GHz." That's all well and good, but the FAA and major US and international airlines aren't buying it.
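The arithmetic behind the quoted figures checks out:

```python
# Gap between the top of 5G usage (3.98 GHz) and the bottom of the
# avionics band (4.2 GHz), expressed in MHz.
upper_5g_ghz = 3.98
avionics_ghz = 4.2

guard_band_mhz = round((avionics_ghz - upper_5g_ghz) * 1000)
print(guard_band_mhz)  # 220 MHz, matching the buffer the FCC describes
```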


Data Mesh Architecture Patterns

An Enterprise Data Mesh is composed of many components (lots more detail available here, here, and here). Data Products, the primary building block within a Data Mesh, contain operational, analytic, and/or engagement data which is synchronized across the organization using an Enterprise’s Data Mesh. APIs are used to access data within a Data Product. To support federated governance, each Data Product contains an audit log that records data changes and a catalog of data it manages. An Enterprise’s Data Mesh has many Data Products. Data Products subscribe to each other’s data such that when one Data Product changes its data, this change is communicated to other Data Products using Change Data Capture and an Event Streaming Backbone. Lastly, an Enterprise Data Catalog — a synchronized aggregation of all Data Product catalogs and data changes — is used to make it easy for any user or developer to find, consume, and govern any data across the enterprise, while also providing the foundation for understanding data lineage across the enterprise.
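A toy, in-memory sketch of the pattern just described (all names and the callback-based "event stream" are illustrative stand-ins for a real Change Data Capture pipeline and streaming backbone):

```python
# Each Data Product keeps its own data, catalog, and audit log; changes
# propagate to subscribers, crudely modeling CDC over an event stream.
class DataProduct:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.catalog = set()      # what data this product manages
        self.audit_log = []       # record of every change, for governance
        self.subscribers = []     # stand-in for an event streaming backbone

    def subscribe(self, other):
        self.subscribers.append(other)

    def update(self, key, value):
        self.data[key] = value
        self.catalog.add(key)
        self.audit_log.append((key, value))
        for sub in self.subscribers:          # change data capture, crudely
            sub.on_change(self.name, key, value)

    def on_change(self, source, key, value):
        self.data[f"{source}.{key}"] = value  # mirror the upstream change

orders = DataProduct("orders")
analytics = DataProduct("analytics")
orders.subscribe(analytics)

orders.update("order_42", {"status": "shipped"})
print(analytics.data)  # the change has propagated to the subscriber
```

An Enterprise Data Catalog in this picture would be built by aggregating every product's `catalog` and `audit_log`, which is what makes enterprise-wide lineage tractable.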



Quote for the day:

"Leadership is the key to 99 percent of all successful efforts." -- Erskine Bowles

Daily Tech Digest - January 21, 2022

Nuclear quantum computing: It’s coming

You can’t just upload a neural network to a quantum computer and expect it to act like it’s been supercharged. The algorithms we’re currently able to run on cutting-edge quantum systems are more like super-challenging math problems that can still be verified using classical means. Unfortunately, the long and short of it is usually: the more qubits you have, the more errors you get. The new research hopes to alleviate that by creating a new way to handle qubit operations, thus allowing gate-based quantum computer systems to scale. ... It’s likely just as safe as using lasers to create qubits out of light, maybe even safer. But the researchers are hoping it’s the foundation for a paradigm that will be much easier to scale than other systems. At the end of the day this is all exciting news. It’s rare to see a peer-reviewed quantum computing breakthrough because the field is incredibly challenging. Getting three in the same day is a eureka moment in its own right. Of course, it could take a while for these early experiments to pan out and turn into full-fledged quantum computers.


Upholding digital ethics with identity and access management

One area closely aligned with ensuring digital ethics and putting in place the right protocols to cope with our new digital processes is human resources (HR). This part of the business has had to make notable changes over the last couple of years, as it has started to rely more heavily on technology. During the pandemic, HR processes such as hiring, conflict resolution, onboarding and offboarding, and other HR-related activities could no longer follow the same face-to-face processes they had historically; workarounds were needed. HR managers had to interview via Zoom; they were required to handle conflict resolutions remotely and virtually, and so much more. Coupled with this, HR teams had a new challenge: to re-invent their processes to fit the new virtual world – while ensuring that this environment has the right digital ethics for the organisation. This is where an identity and access management (IAM) solution can help less technical individuals. In applying digital ethics, security of personnel data is paramount for organisations, and IAM solutions can help make some important security requirements of remote working easier to overcome. Let’s look at how an IAM solution can ensure the security, ethics and privacy of data.


Data Fabrics: Six Top Use Cases

Data fabrics are central data management frameworks that allow organizations to access their data from any endpoint within a hybrid cloud environment. “They use technologies and services to enrich the data and make it more useful for users,” explains David Proctor, senior database manager at Everconnect, which provides remote database administration and support. Data fabrics are becoming increasingly popular as organizations turn to digital storage methods. As a company grows, storage can become more complex as data is stored in different locations that are inaccessible to other parts of the organization, Proctor observes. “Data fabrics standardize … and make data accessible for everyone regardless of their location/position in the company.” In a nutshell, data fabric technology is the glue that binds all an organization’s data systems together into a cohesive and uniform layer, says Sean Knapp, founder and CEO of Ascend.io, which offers an autonomous dataflow service. It allows data engineers to build, scale, and operate continuously optimized, Apache Spark-based pipelines with less code.


UK Issues Fresh Proposals to Tackle Cyberthreats

The government has sought to widen the scope of the law to include Managed Service Providers, which provide specialized online and digital services such as security services, workplace services and IT outsourcing. "These firms are crucial to boosting the growth of the country's 150.6-billion-pound digital sector and have privileged access to their clients' networks and systems," the report says. "While the regulations apply to some digital services such as online marketplaces, online search engines and cloud computing, there has been an increase in the use and dependence on digital services for providing corporate needs such as information storage, data processing and running software." Expanding NIS regulations to include MSPs will allow smaller businesses to attain a higher level of cyber resilience, says Tim Mackey, principal security strategist at the Synopsys Cybersecurity Research Center. The recent Log4Shell vulnerability has illustrated that cyber resilience is a function of how well software supply chains are understood, he says.


Quantum computing is coming. Now is the right time to start getting ready

Evidence suggests that message is already getting through: three-quarters (74%) of senior executives believe organisations that fail to adopt quantum computing soon will fall behind quickly, according to a recent survey by quantum company Zapata Computing and Wakefield Research. Di Meglio believes the secret to successfully understanding where your business might potentially create a quantum advantage is to focus on developments that are already being made around new instruments, tools, and methods of collaboration. He says early preparatory work will help CIOs and their businesses to identify the right skills, technologies and partners for quantum success in the longer term. As part of this process, CIOs and their executive partners must look to build collaborative teams, where all the necessary skills for quantum are brought together and then exploited in the most useful way. "Quantum computing is a very multidisciplinary area. Organisations, institutions and universities really need to work to break the silos in-between these areas," he says.


The importance of securing machine-to-machine and human-to-machine interaction

Interconnecting and providing the right level of access to disparate workloads introduces a host of new security and compliance challenges. For instance, the sheer number of secrets used by machine-to-machine and human-to-machine interactions has proliferated dramatically due to automation, containerization, DevOps initiatives, and so on. In the hybrid multicloud environment described above, there is a risk of having separate islands of secrets. It is difficult for security teams to see how many secrets are in use overall, who uses them, and where. And if they can’t see them, how can they ensure they are safe? Another challenge associated with the automation/DevOps trends is how secrets are used. Too often, we see secrets hardcoded in source code or configuration files, in plain text, which are then uploaded to public repositories such as GitHub. These secrets, and especially the ones used by privileged users such as network or security admins, and DevOps engineers, have traditionally been managed by Privileged Access Management (PAM) solutions.
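As a minimal illustration of the alternative to hardcoding (the environment variable name is hypothetical; in practice the value would come from a secrets manager or vault):

```python
# Read secrets from the environment at runtime instead of hardcoding them
# in source that may end up in a public repository.
import os

def get_db_password() -> str:
    """Fetch the secret at runtime; fail loudly if it is missing."""
    password = os.environ.get("DB_PASSWORD")  # hypothetical variable name
    if password is None:
        raise RuntimeError("DB_PASSWORD not set; refusing to fall back to a default")
    return password

# The anti-pattern the article warns about -- never do this:
# DB_PASSWORD = "hunter2"  # hardcoded, ends up on GitHub in plain text
```

Failing loudly when the secret is absent matters: a silent default is itself a hardcoded secret by another name.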


Open source creates value, but how do you measure it?

Beyond updating our understanding of innovation outputs with open source, there are many more innovation questions: How does open source software contribute to innovation as an input, and can targeted research funding for open source increase this contribution? Further research should build on initial measurement efforts to understand how and to what extent open source software accelerates scientific research; As open source business models have evolved over time, how have firm contributions to open source changed? Amid these business innovations, particularly the rise of cloud-based software as a service, what is the relative contribution to open source from these big cloud companies?; How do we value the contributions of innovations in developer tools to open source, including maintainers’ productivity and workload? ...; What is the economic impact—at both an organizational and economy-wide level—of new institutional approaches to open source, including the Open Source Program Office, pioneered in industry that is now percolating into the public and social sectors?


Why Artificial Intelligence (AI) pilot projects fail: 4 reasons

Not every person working on an AI-based project is an AI genius. However, successfully deploying an AI solution requires a general understanding by every employee and end-user. Everyone within an organization should understand the possibilities and limitations. With a lack of knowledge by all involved comes a lack of deployment. ... Everyone from executives to employees needs open feedback loops to allow for discussions on AI and getting people acquainted with the solution. Those more familiar with AI then have the opportunity to clearly communicate the level of interaction it requires to ensure everyone has the correct information needed for maximum efficiency. Leading the change management to implement AI for digital transformation success is not limited to the role of a CIO or IT team. Instead, businesses as a whole need to work together to ensure every department has the proper tools and technologies in place to their respective standards.


Closing the agile achievement gap

The primary role of the lean portfolio management (LPM) function in agile-minded organizations is to align agile development with business strategy. In most cases, this function is made up of staff from the organization’s finance, IT, and business units, and also draws on expertise and input from human resources and IT teams. Most important, the LPM function aligns the annual planning and funding processes with the agile methodology. It also establishes objectives and key results and key performance indicators (KPIs) to measure the effectiveness of the work being done and to keep deliverables on track. These tasks are often time-consuming and involve large change management efforts, which is why the LPM function must be implemented early in the process. A wholesale retail company needed to define and implement an LPM function at the outset of its agile transformation. The company needed to modernize its workforce and IT operating model and employ a product-centric mindset on projects.


HR and data: what gets measured gets improved

Used wisely, data has colossal power. This was recognised by the management theorist, Peter Drucker, who reportedly said, “What gets measured gets improved”. The trick is to understand the value of data, measure the right things and then make sense of it all to inform decisions. And huge swathes of the economy are now doing so – often using AI – to drive innovation and accelerate growth. Sadly, HR is lagging. When searching the top HR degrees in the UK, few of them focus on data as a major part of the job. Out of 39 modules, over three years, one degree course lists “managing data” just once. And if you ask most people why they got into HR, it’s about relationships — making people’s working lives better, supporting others and helping employees thrive. These are all vital, but it often means data is ignored, despite it having a huge role to play in meeting these goals. This is a fact recognised by the CIPD. It says too few organisations use HR data and analytics to help inform strategic decisions about how they invest in, manage and develop their workforce to deliver on their business strategy.



Quote for the day:

"If you don't find a leader, perhaps it is because you were meant to lead." -- Glenn Beck

Daily Tech Digest - January 20, 2022

The lowdown on industry clouds

If you ask today’s CIOs why some applications won’t move to the cloud, they will mention issues such as a lack of systems that can deal with compliance, or vertically oriented applications and data that are too important to entrust to the cloud. Cloud providers now offer or will soon offer prebuilt, industry-specific features and services that typically will be better than anything companies could build and maintain themselves. The coming industry-specific world of cloud will move the needle enough for many enterprises to commit their critical data and applications to the cloud. The cloud providers understand this paradigm, and in many cases, the development and deployment of industry-specific cloud services may be a loss leader that will drive more adoptions. It’s important that we understand the likely motivations of the cloud providers before we adopt any cloud services, and I’ve made some educated assumptions here. There is always risk when you become too coupled to any cloud services because they will all go away at some point in time. 


Htmx: HTML Approach to Interactivity in a JavaScript World

Complexity in frontend web development is something that Gross has been attempting to address for nearly a decade now, having first created the intercooler.js alternative frontend library back in 2013, which came with the tagline “AJAX With Attributes: There is no need to be complex”. Recently, intercooler.js hit version 2.0 and became htmx, which the GitHub description says “allows you to access AJAX, CSS Transitions, WebSockets and Server Sent Events directly in HTML, using attributes, so you can build modern user interfaces with the simplicity and power of hypertext”. ... More simply, Gross described htmx as attempting “to stay within the original model of the web, using HTML and Hypermedia concepts, rather than asking developers to write a lot of JavaScript.” Somewhat amusingly for this discussion, htmx is a JavaScript library — but in keeping with this minimalist approach, it is dependency-free and frontend developers using htmx do not need to write JavaScript to achieve similar functionality.


How to Become a Serverless Developer in 2022

When building a solution it is possible to do it all in the AWS Console. That's how I started my AWS journey. The issue is that it is not controllable, manageable or scalable. If you want to copy this setup to another account (separate dev and prod accounts) you have to remember all the steps you've done. Working with multiple team members can get messy. That is why it's helpful to use a framework to allow us to write Infrastructure-as-Code (IaC). This allows us to use Git for version control. This makes working as a team much easier, enables multi-environment deployments, even continuous integration and deployment. All things that are required when running production workloads ... I would recommend starting with a personal project that you use just for practicing using new services in. That way you don't have to worry about breaking things and you can focus on how the service is working. You can now start using it in production apps and this is where you'll learn a lot about the details of a service. 


From monolith to microservices: How applications evolve

A microservices-oriented application (MOA) addresses the shortcomings inherent in the monolithic application design. As described earlier, monolithic applications are hard to maintain and upgrade. Due to the tight coupling that's implicit in a monolithic application's construction, even making a small change can create unforeseen problems that can cascade throughout the application stack. On the other hand, MOAs are loosely coupled, some say to an extreme. According to the five principles described in the previous article of this series, a microservice is an entirely independent unit of computing. It has a distinct presence on the network and carries its own data. It's completely and independently responsible for its own well-being. This means that as long as changes in its public interface do not affect current consumers of the service, an MOA can be changed independently of any other microservice in the MOA. Figure 4 illustrates an MOA that is a transformation of the monolithic application described previously. Notice that each microservice has its own IP address and port assignment.
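The independence property just described can be sketched with two toy services, each with its own network presence (the OS assigns a free port to each) and its own data; the service names and payloads are illustrative:

```python
# Two "microservices", each an independent unit with its own port and
# its own data, queried only over the network -- never via shared state.
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

def make_service(data):
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(data).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # keep the demo quiet
            pass

    server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# each service carries its own data and answers on its own address
orders = make_service({"service": "orders", "open_orders": 3})
billing = make_service({"service": "billing", "invoices": 12})

for srv in (orders, billing):
    url = f"http://127.0.0.1:{srv.server_address[1]}/"
    print(json.loads(urlopen(url).read()))
```

Because each service owns its data and exposes it only through its public interface, either one can be changed or redeployed without touching the other, which is the loose coupling the article contrasts with the monolith.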


TinyML is bringing neural networks to small microcontrollers

There have been multiple efforts to shrink deep neural networks to a size that fits on small-memory computing devices. However, most of these efforts are focused on reducing the number of parameters in the deep learning model. For example, “pruning,” a popular class of optimization algorithms, compresses neural networks by removing the parameters that are not significant to the model’s output. The problem with pruning methods is that they don’t address the memory bottleneck of the neural networks. Standard implementations of deep learning libraries require an entire network layer and activation maps to be loaded into memory. Unfortunately, classic optimization methods don’t make any significant changes to the early layers of the network, especially in convolutional neural networks. This causes an imbalance in the size of different layers of the network and results in a “memory-peak” problem: even though the network becomes lighter after pruning, the device running it must have as much memory as the largest layer.
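A back-of-the-envelope sketch of the memory-peak problem (the layer sizes are made up for illustration): pruning can slash the total parameter count, yet the device still needs room for the largest single layer.

```python
# Parameter counts per layer, before and after pruning the later layers.
layer_params = [300_000, 50_000, 40_000, 10_000]   # original network
pruned       = [300_000, 5_000, 4_000, 1_000]      # early layer untouched

def total(params):
    return sum(params)

def memory_peak(params):
    # the device must hold the biggest single layer in memory at once
    return max(params)

print(total(layer_params), total(pruned))              # total shrinks a lot
print(memory_peak(layer_params), memory_peak(pruned))  # peak does not move
```

The total drops from 400,000 to 310,000 parameters, but because pruning barely touches the large early layer, the memory peak stays at 300,000 either way.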


The evolution of security analytics

The third generation of security analytics technologies brings us to the current day, where machine learning, behavioral analysis and customization are driving innovation. There are now SIEM products that allow organizations to use their existing data lakes, rather than forcing customers to use proprietary ones. And some solutions have opened their analytics, enrichment, and machine learning models so users can better understand them and modify as needed. Today, powerful algorithms find patterns in data, set baselines and identify outliers. There’s also a greater focus on identifying anomalous behavior (a user taking suspicious actions) and on prioritizing and ranking the risk of alerts based on contextual information like data from Sharepoint or IAM systems. For example, a user accessing source code with legitimate credentials might be a low-priority alert at best, but that user doing so in the middle of the night for the first time in weeks from a suspicious location should trigger a high-priority alert.
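The baseline-and-outlier idea described above can be sketched as follows; the login-hour data and z-score threshold are illustrative, not a production detection model:

```python
# Learn a user's typical login hour from history, then flag logins that
# fall far outside that baseline -- a toy version of behavioral analytics.
from statistics import mean, stdev

def is_anomalous(history_hours, new_hour, z_threshold=3.0):
    """Flag a login hour more than z_threshold std devs from the baseline."""
    mu = mean(history_hours)
    sigma = stdev(history_hours) or 1.0   # avoid dividing by zero
    return abs(new_hour - mu) / sigma > z_threshold

workday_logins = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]  # user's usual pattern

print(is_anomalous(workday_logins, 9))   # False: a normal morning login
print(is_anomalous(workday_logins, 3))   # True: a 3 a.m. login is an outlier
```

A real system would combine many such signals (location, device, resource accessed) and rank the resulting alerts by risk, as the article describes with the source-code-at-night example.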


Vulnerable AWS Lambda function – Initial access in cloud attacks

From a security perspective, because a Lambda function is managed by the cloud provider yet still configurable by the user, the security concerns and risks are likewise shared between the two actors. Since the user doesn't have control over the infrastructure behind a specific Lambda function, the security risks of the underlying infrastructure are managed directly by the cloud provider. ... In order to mitigate this scenario, we can act on different levels and different features. In particular, we could: disable public access for the S3 bucket, so that it is reachable only from inside the account by authenticated users; review the code used inside the Lambda function, to be sure there aren't any security bugs and that all user inputs are correctly sanitized, following secure-coding guidelines; and apply the principle of least privilege to all AWS IAM roles attached to cloud resources, to avoid unwanted actions or possible privilege-escalation paths inside the account.
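Two of the mitigations above correspond directly to AWS configuration documents: the S3 public access block settings and a least-privilege IAM policy. The sketch below constructs both as plain dictionaries; the bucket name and prefix are hypothetical, and in practice these would be applied through the AWS API, console, or infrastructure-as-code rather than printed:

```python
import json

# Block all public access on the S3 bucket (mirrors S3's
# PublicAccessBlock configuration: every flag enabled).
public_access_block = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

# Least-privilege policy for the Lambda function's role: only the one
# action on the one prefix it actually needs, instead of "s3:*" on "*".
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-app-bucket/uploads/*",
    }],
}

print(json.dumps(least_privilege_policy, indent=2))
```

Scoping `Action` and `Resource` this narrowly is what closes off the privilege-escalation paths the excerpt warns about: a compromised function can then read only the objects it was meant to read.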


Farming 3.0: How AI, IoT and Mobile Apps Are driving the AgriTech Revolution

Artificial Intelligence (AI)-led data points will be a crucial deciding factor for farming in the coming decades. AI-led precision agriculture and farm management, pest prevention, agricultural robots, automated weeding and crop-quality identification will help improve operational efficiency through a unified supply chain and make farming smart, predictive and intelligent. AI is also playing a crucial role in symptom identification in the animal husbandry space, helping with quicker diagnosis so that livestock are not heavily impacted and any major outbreaks can be stopped early. To take full advantage of AI-driven tech, the Indian agricultural sector needs to solve two problems: build better digital infrastructure in rural areas and adopt effective data practices. Smart apps are the next frontier of development in farming. As the number of agritech start-ups grows, there is a proliferation of mobile-based smart apps across the whole agricultural ecosystem.


Open source developers, who work for free, are discovering they have power

This system’s inequity is often revealed when there’s a widespread security breach, such as the Log4Shell vulnerabilities that emerged in the Log4j Java library in December 2021, triggering a slew of critical security bulletins that affected some of the largest companies in the world. The developers of the affected library were forced to work around the clock to mitigate the problems, without compensation or much acknowledgement that their work had been used for free in the first place. curl’s developer experienced similar behavior, with companies depending on his project demanding he fly out to help them when they ran into trouble with their code, despite not paying him for his services. As a result, it shouldn’t be a surprise that some open source developers are beginning to realize they wield outsized power, despite the lack of compensation they receive for their work, because their projects are used by some of the largest, most profitable companies in the world.


The Drawbacks of a SOAR

SOARs are great at automatically detecting, assessing, and helping to mitigate security threats. But threat detection, assessment, and mitigation are only one component of a broader cybersecurity strategy. Defining a total security strategy also requires efforts like determining where the greatest cybersecurity risks to your business lie, optimizing your security posture (which SOARs don’t really do), and ensuring that security is a priority across the organization, not just for security engineers. Without these insights, you don’t know how to prioritize threats, how to assess the impact of breaches, and so on. Over-reliance on SOARs alone, then, leaves businesses at risk of focusing too much on the operational components of security (like incident detection and response) and not enough on the broader strategy that forms the foundation for effective security operations. ... The fact that SOARs cater mostly to security experts also means that they do a poor job of fostering a security-centric culture across the organization.



Quote for the day:

"Every great leader has incredible odds to overcome." -- Wayde Goodall