Daily Tech Digest - July 22, 2022

Can automated test tools eliminate QA?

The traditional quality assurance process is multi-step and requires at least two types of software testers: the first tester exercises an application’s data-editing and processing functions and ensures that all of these processes work correctly. The second QA tester is more familiar with the business’s needs and how the application should address them. This tester is usually savvy about application technical details as well as the business systems with which the application is going to interact. But there’s more to QA than just these two front-line roles. Applications must be integration-tested to ensure that they interact and exchange data correctly with all of the other systems they work with. They must also be moved to application staging areas where they can be regression-tested. This ensures that they don’t break any other existing software with which they interface and that they can handle the maximum volume of transactions for which they were designed in production. From an IT standpoint, applications must pass through all of these hurdles before they can go live.
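
Much of the integration and regression work described above is exactly what automated test tools take over. As a minimal sketch, assuming a hypothetical staging endpoint and made-up expected values, a scripted regression check might look like this in Python (runnable with pytest and the requests library):

```python
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical staging environment

def test_order_quote_is_unchanged():
    """Regression check: an existing pricing endpoint still returns the agreed totals."""
    response = requests.post(
        f"{BASE_URL}/orders/quote",
        json={"sku": "A-100", "quantity": 3},  # made-up test data
        timeout=10,
    )
    assert response.status_code == 200
    body = response.json()
    # Expected values agreed with the business-facing tester;
    # a change here means existing behaviour has broken.
    assert body["total"] == 29.97
    assert body["currency"] == "USD"
```

Checks like this can run on every build against the staging area, which is what lets a regression suite keep pace with frequent releases.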


The downside of digital transformation: why organisations must allow for those who can’t or won’t move online

Through our current research we find the reality of a digitally enabled society is, in fact, far from perfect and frictionless. Our preliminary findings point to the need to better understand the outcomes of digital transformation at a more nuanced, individual level. Reasons vary as to why a significant number of people find accessing and navigating online services difficult. And it’s often an intersection of multiple causes related to finance, education, culture, language, trust or well-being. Even when given access to digital technology and skills, the complexity of many online requirements and the chaotic life situations some people experience limit their ability to engage with digital services in a productive and meaningful way. The resulting sense of disenfranchisement and loss of control is regrettable, but it isn’t inevitable. Some organisations are now looking for alternatives to a single-minded focus on transferring services online. Other organisations are considering partnerships with intermediaries who can work with individuals who find engaging with digital services difficult.


Authentic leadership: Building an organization that thrives

Becoming an authentic leader takes a lot of self-reflection and self-awareness. You’ll need to work to understand yourself and others, using empathy and compassion as your driving force. For examples of authentic leadership in the tech industry, you can look to former CEO of Apple Steve Jobs, former CEO of GE Jack Welch, former CEO of Xerox Anne Mulcahy, and former CEO of IBM Sam Palmisano. These leaders are all known for their authentic leadership styles that helped them drive business success. To become an authentic leader, you’ll need to embark on a path of self-discovery, establish a strong set of values and principles that will guide you in your decision-making, and be completely honest with yourself about who you are. An authentic leader isn’t afraid to make mistakes or to own up to mistakes when they happen. You’ll need to make sure you’re someone who takes accountability, maintains calm under pressure, and can be vulnerable with coworkers and employees. It’s important to know your own strengths and weaknesses as an authentic leader and to identify how you cope with success, failure, and setbacks. 


Reporting to build trust: A framework

Whether you’re preparing an integrated annual report or a stand-alone sustainability report, the publication has to be informed by steps one and two. It’s also critical to put the right resources in place, in terms of both time and people, along with the right incentives and the right oversight. Companies can truly be confident in what they report only when it is subject to board oversight, relevant to the company’s strategy, and has the right governance, systems and controls in place to measure progress towards targets and plans. Many large companies that have teams of hundreds working on financial reporting often have only a handful of people working on sustainability reporting. Even with the best intentions, less-resourced areas have a higher potential to miss something that turns out to be critically important. The business world’s financial reporting capabilities have been built over 170 years. When it comes to sustainability reporting, we need to move quickly to build the right capabilities—using what we’ve learned from financial reporting. And if sustainability reporting is to be on par with financial reporting for informing resource allocation decisions, it needs to be just as robust and relevant. 


Six reasons successful leaders love questions

Comparing questions to dreams is Straus’s way of saying that questions hold the key to better understanding the subconscious dimensions of the person asking the questions. It can be extremely difficult to understand why employees think the way they do, and how to help them change their mindset and behavior if required. It then stands to reason that questions might also help leaders better understand the culture and habits of their organization. In his 1988 article, “Toward a History of the Question,” Dutch philosopher C.E.M. Struyker Boudier writes, “In and by way of his questions the human being can reach out to the divine, and likewise degrade himself to the demonic inferno of evil.” Questioning forces people to the line between good and bad, yes and no, pro and con. Asking questions is closely related to making a choice. We cannot address everything at once, so to ask a question, we must decide what to focus on and how. We have the choice to take an approach that is optimistic or pessimistic, abstract or concrete, individual or collective, broad or narrow, past- or future-oriented, etc. 


Discovering the Versatility of OpenEBS

OpenEBS provides storage for stateful applications running on Kubernetes, including dynamic local persistent volumes (like the Rancher local path provisioner) and replicated volumes using various "data engines". Like Prometheus, which can be deployed on a Raspberry Pi to monitor the temperature of your beer or sourdough cultures in your basement but can also be scaled up to monitor hundreds of thousands of servers, OpenEBS can be used for simple projects and quick demos as well as for large clusters with sophisticated storage needs. OpenEBS supports many different "data engines", and that can be a bit overwhelming at first. But these data engines are precisely what makes OpenEBS so versatile. There are "local PV" engines that typically require little or no configuration and offer good performance, but that exist on a single node and become unavailable if that node goes down. And there are replicated engines that offer resilience against node failures. Some of these replicated engines are super easy to set up, but the ones offering the best performance and features will take a bit more work.
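
As a rough illustration of how little configuration the local PV path needs, a workload can simply request a volume from an OpenEBS hostpath storage class. The sketch below uses the Kubernetes Python client; the class name "openebs-hostpath" is the usual default for the local hostpath engine, but verify it against your own installation:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="openebs-hostpath",  # assumed default local PV class
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

The trade-off described above still applies: a volume provisioned this way lives on a single node and disappears with it, which is the failure mode the replicated engines exist to remove.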


Cyber Resiliency: What It Is and How To Build It

Creating a cyber-resilience plan requires buy-in and input from all parts of the organization, including finance, IT, and operations. “It’s important that departments work together to classify information and risk, as well as to determine where to put controls and where responsibilities lie,” Piker says. “Once a plan has been agreed upon, a budget must be carved out to fund the actual implementation of the plan.” It's important to engage the entire organization. “This is not just a technical issue under the control of a CIO or CISO,” Adkins says. “Your employees and vendors can play a critical role in spotting potential attacks to limit their impact.” Additionally, with the continuing trend toward remote work, employee cyber awareness and training are more important than ever. “This means formal policies, training, simulation exercises, and ongoing analysis of risks,” Adkins says. Adkins advises organizations to use tabletop exercises to test incident response practices and times. “It's much easier to fix a flaw in your planning and processes when you’re not in the middle of a crisis,” he says.


How kitemarks are kicking off IoT regulation

Interestingly, all those we have seen apply for the scheme have chosen to go for Gold because they want to be seen to be adhering to the highest levels, and it’s been attracting some big international consumer brands. The smaller players that previously had difficulty understanding and navigating the red tape involved in the Code of Practice/ETSI have also valued the guidance and human touch of an assessor. The theory is that the product assurance scheme will spur compliance ahead of the PSTI, making the transition that much easier for the IoT industry, and the fact that many have aimed high suggests the approach is working. Manufacturers like the visibility conferred by the badge, which then becomes a differentiator in the marketplace, as well as ensuring future compliance. It’s for these reasons that many are watching the assurance rollout with interest. IoT kitemark schemes vary internationally, from labels that denote compliance with a set of cybersecurity criteria, to a single label that attests that basic security features are provided, to several tiers or even a label that lists cybersecurity information about the IoT device.


4 tips for leading remote IT teams

Traditional enterprises tend to have a “we will train our employees only as much as we have to” mentality. However, this approach makes your employees more likely to seek other opportunities where they feel more valued and prepared. Of course, there is always the risk of employees leaving with their newfound skills, but having undertrained employees can be even worse for the business. Set aside a generous annual budget for training and development and help map out a personalized training path for each employee. This is critical to employee happiness and long-term business planning. These plans should also demonstrate growth opportunities that benefit each employee – not just the organization. In-person training is great, but don’t underestimate the value of virtual training. While a personal connection with instructors can often provide more knowledge and attention, the convenience of virtual training makes it a popular alternative these days. Encourage your employees to explore training opportunities where they’re located.


How Microcontainers Gain Against Large Containers

A microcontainer is a container optimized for efficiency. It still contains all the files needed to provide scaling, isolation, and parity to the software application, but the number of files kept in the image has been optimized. Important files retained in a microcontainer include the shell, the package manager, and the standard C library. In parallel, there is the concept of ‘distroless’ containers, in which all unused files are stripped from the image entirely. It is worth emphasizing the distinction between a microcontainer and distroless: a microcontainer still contains some files the application never uses, because they are required for the system to remain complete. A microcontainer works the same way as a regular container and performs all the same functions; the only difference is that its contents have been pared down by developers, making the image smaller. In short, a microcontainer contains an optimized set of files, so it still includes all the files and dependencies required to run the application, but in a lighter, smaller format.



Quote for the day:

"The first task of a leader is to keep hope alive." -- Joe Batten

Daily Tech Digest - July 21, 2022

Google Launches Carbon, an Experimental Replacement for C++

While Carbon began as a Google internal project, the development team ultimately wants to reduce contributions from Google, or any other single company, to less than 50% by the end of the year. They want to hand the project off to an independent software foundation, where its development will be led by volunteers. ... The design team wants to release a core working version (“0.1”) by the end of the year. Carbon will be built on a foundation of modern programming principles, including a generics system that would remove the need to check and recheck the code for each instantiation. Another much-needed feature lacking in C++ is memory safety. Memory access bugs are one of the largest culprits of security exploits. Carbon designers will look for ways to better track uninitialized states, design APIs and idioms that support dynamic bounds checks, and build a comprehensive default debug build mode. Over time, the designers plan to build a safe Carbon subset. ... Carbon is for those developers who already have large codebases in C++, which are difficult to convert into Rust. Carbon is specifically what Carruth called a “successor language,” which is built atop an existing ecosystem, C++ in this case.


The Cost of Production Blindness

DevOps and SRE are roles that didn’t exist back then. Yet today, they’re often essential for major businesses. They brought with them tremendous advancements to the reliability of production, but they also brought with them a cost: distance. Production is in the amorphous cloud, which is accessible everywhere. Yet it’s never been further away from the people who wrote the software powering it. We no longer have the fundamental insight we took for granted a bit over a decade ago. Yes, and no. We gave up some insight and control and got a lot in return: stability, simplicity, and security. These are pretty incredible benefits. We don’t want to give these benefits up. But we also lost some insight, debugging became harder, and complexity rose. ... Log ingestion is probably the most expensive feature in your application. Removing a single line of log code can end up saving thousands of dollars in ingestion and storage costs. We tend to overlog, since the alternative is production issues that we can’t trace to their root cause. We need a middle ground. We want the ability to follow an issue through without overlogging. Developer observability lets you add logs dynamically as needed into production.
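
The specifics of developer observability tooling vary, but one modest step toward that middle ground, independent of any particular product, is making log verbosity adjustable at runtime instead of baked into the deploy. A small Python sketch (the logger name and the trigger for changing verbosity are made up):

```python
import logging

logging.basicConfig(format="%(asctime)s %(name)s %(levelname)s %(message)s")
logger = logging.getLogger("checkout")
logger.setLevel(logging.WARNING)  # default: keep ingestion volume (and cost) low

def set_verbosity(level_name: str) -> None:
    """Flip verbosity at runtime, e.g. from an admin endpoint or feature flag."""
    logger.setLevel(getattr(logging, level_name.upper()))

logger.debug("cart recalculated")  # suppressed under the default level
set_verbosity("DEBUG")             # temporarily dig into a production issue
logger.debug("cart recalculated")  # now emitted; revert when the issue is traced
```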


UK government introduces data reforms legislation to Parliament

Suggested changes included removing organisations’ requirements to designate data protection officers (DPOs), ending the need for mandatory data protection impact assessments (DPIAs), introducing a “fee regime” for subject access requests (SARs), and removing the requirement to review data adequacy decisions every four years. All of these are now included in the updated Bill in some form. “We now have confirmation of what the UK’s post-GDPR data framework is intended to look like,” said Edward Machin, a senior lawyer in Ropes & Gray’s data, privacy and cyber security practice. ... “The GDPR isn’t perfect and it would be foolish for the UK not to learn from those lessons in its own approach, but it’s walking a tightrope between improvements to the current framework and performative changes for the sake of ripping up Brussels red tape. My initial impressions of the Bill are that the government has struck the balance in favour of business and overlooked some civil society concerns, so I would think that reduced rights and safeguards for individuals will be areas that are targeted for revision before the Bill is finalised.”
 

Hackers can spoof commit metadata to create false GitHub repositories

Researchers identified that a threat actor could tamper with commit metadata to make a repository appear older than it is. Alternatively, they can deceive developers by promoting repositories as trusted because reputable contributors appear to be maintaining them. It is also possible to spoof the committer’s identity and attribute the commit to a genuine GitHub account. For context: with open source software, developers can create apps faster and even skip third-party code auditing if they are sure that the source of the software is reliable. They can choose GitHub repositories that are actively maintained or whose contributors are trustworthy. Checkmarx researchers explained in their blog post that threat actors could manipulate the timestamps of the commits listed on GitHub. Fake commits can also be generated automatically and added to a user’s GitHub activity graph, allowing an attacker to appear active on the platform for a long time. The activity graph displays activity on both private and public repositories, making it impossible to discredit the fake commits.
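
The reason this works is that commit timestamps are client-supplied metadata rather than something GitHub verifies. As a simple illustration (the repository and commit message are hypothetical, and this assumes it is run inside an existing repository), git's standard environment variables let anyone back-date a commit, which will then appear in the activity graph as if it were years old:

```python
import os
import subprocess

env = dict(
    os.environ,
    GIT_AUTHOR_DATE="2015-01-01T12:00:00",     # back-dated author timestamp
    GIT_COMMITTER_DATE="2015-01-01T12:00:00",  # back-dated committer timestamp
)

# Creates a commit that GitHub will display as if it were made in 2015.
subprocess.run(
    ["git", "commit", "--allow-empty", "-m", "routine maintenance"],
    env=env,
    check=True,
)
```

Signing commits (and checking for GitHub's "Verified" badge) is the usual countermeasure, since a signature, unlike a timestamp or a committer email address, cannot be forged for someone else's identity.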


Hackers turn to cloud storage services in attempt to hide their attacks

The group is widely believed to be linked to the Russian Foreign Intelligence Service (SVR), responsible for several major cyberattacks, including the supply chain attack against SolarWinds, the US Democratic National Committee (DNC) hack, and espionage campaigns targeting governments and embassies around the world. Now they're attempting to use legitimate cloud services, including Google Drive and Dropbox – and have already used this tactic as part of attacks that took place between May and June this year. The attacks begin with phishing emails sent out to targets at European embassies, posing as invites to meetings with ambassadors, complete with a supposed agenda attached as a PDF. The PDF is malicious and, if it worked as intended, it would call out to a Dropbox account run by the attackers to secretly deliver Cobalt Strike – a penetration-testing tool popular with malicious attackers – to the victim's device. However, this initial call out was unsuccessful earlier this year, something researchers suggest is down to restrictive policies on corporate networks about using third-party services.

 

How Zero Trust can stop the catastrophic outcomes of cyberattacks on critical infrastructure

The impending necessity of Zero Trust should be recognised by every government and CNI provider around the world if they are to have any hope of mitigating sophisticated attacks like ransomware. Critical infrastructure is the backbone of a country’s economy and social order. It is impossible to maintain a sustainable society when sectors like emergency healthcare, energy distribution, food and agriculture, education, and financial services are constantly under disruptive threats. In May 2021, the US government issued an executive order for federal government agencies to improve their cybersecurity postures and recommended moving toward a Zero Trust architecture as the solution. Following this executive order, the Pentagon launched a Zero Trust office in December 2021, and in January 2022 President Biden further emphasised the urgency of moving to a Zero Trust architecture by mandating all government agencies to achieve specific Zero Trust goals by the end of Fiscal Year 2024.


Transparency in the shadowy world of cyberattacks

Focusing on the fundamentals of software security is in some ways more important to raise all of us above the level of insecurity we see today. We curate and use threat intelligence to protect billions of users–and have been doing so for some time. But you need more than intelligence, and you need more than security products–you need secure products. Security has to be built in, not just bolted on. Aurora showed us that we (and many in the industry) were doing cybersecurity wrong. Security back then was often “crunchy on the outside, chewy in the middle.” Great for candy bars, not so great for preventing attacks. We were building high walls to keep bad actors out, but if they got past those walls, they had wide internal access. The attack helped us recognize that our approach needed to change–that we needed to double down on security by design. We needed a future-oriented network, one that reflected the openness, flexibility, and interoperability of the internet, and the way people and organizations were already increasingly working. In short, we knew that we had to redesign security for the Cloud.


The importance of secure passwords can’t be emphasized enough

Mobile phones are a main and often overlooked concern. We found that 30% of respondents do not use antivirus on their phones, meaning they are not properly securing their devices. This is especially a concern as the demographic most often on their phones are also the ones who are less worried about online threats and vulnerabilities. Password managers, and passwords stored in an electronic file or in physical format, are used most frequently for work devices and least frequently for personal phones. The autofill option and password managers are used most often by 25-44-year-olds, and hard-copy formats are used more by those between 55 and 65. But even if work accounts are secure, that doesn’t mean that sensitive information from work doesn’t carry over onto personal phones. Email and communication apps connected to work accounts are often downloaded onto personal devices, and if someone uses the same passwords across accounts, their personal devices being compromised means their work ones are as well.


Unlocking the potential of AI to increase customer retention

A true AI-fuelled CRM goes beyond simple automation. To provide real benefit, AI must aggregate data from multiple different sources — including in-house sales, marketing, and service tools. It needs to break down organisational silos to identify patterns in interactions and offer deeper customer insights. Some organisations feel they don’t necessarily have enough primary data to build effective predictive models. Yet there are vast amounts of organisational data generated around a single customer or prospect. The trick is to leverage a CRM that understands and captures all of these interactions in a format that can fuel AI initiatives. By breaking down the silos between business units and integrating all of the valuable data that they hold, organisations will be able to benefit from the most advanced predictive models. This is often more challenging than it should be to implement. Business systems are typically good at providing a snapshot of an organisation on any given day, but they aren’t usually as good at gathering historical information.


Burnout: 3 steps to prevent it on your team

Company culture doesn’t just happen. Leaders must actively maintain and shape it to identify ongoing opportunities that empower employees to support and contribute to it. Employee contributions can be as small as internal pulse surveys or as large as designing new groups or initiatives. Think about creating a club to encourage the workforce to participate in the hiring process and weigh in on how candidates would mesh with internal teams. This engagement would directly shape how the organization operates and builds positive working environments for employees – no matter the physical or remote work setting. By opening the door for employees to get involved and provide input, leaders can identify signs of fatigue earlier, address pain points before employees reach the pinnacle of exhaustion, and create a community that motivates and engages the workforce. ... Too often, leaders view benefits as the silver bullet to burnout. But benefits alone won’t cure feelings of burnout. If your workforce is giving direct feedback on areas that need improvement, simply listening is not enough. Take action to meet these needs and make your actions known.



Quote for the day:

"Take time to deliberate; but when the time for action arrives, stop thinking and go in." - - Andrew Jackson

Daily Tech Digest - July 20, 2022

CIOs contend with rising cloud costs

“A lot of our clients are stuck in the middle,” says Ashley Skyrme, senior managing director and leader of the Global Cloud First Strategy and Consulting practice at Accenture. “The spigot is turned on and they have these mounting costs because cloud availability and scalability are high, and more businesses are adopting it.” And as the migration progresses, cloud costs soon rank second — next only to payroll — in the corporate purse, experts say. The complexity of navigating cloud use and costs has spawned a cottage industry of SaaS providers lining up to help enterprises slash their cloud bills. ... “Cloud costs are rising,” says Bill VanCuren, CIO of NCR. “We plan to manage within the large volume agreement and other techniques to reduce VMs [virtual machines].” Naturally, heavy cloud use is compounding the costs of maintaining or decommissioning data centers that are being kept online to ensure business continuity as the migration to the cloud continues. But more significant to the rising cost problem is the lack of understanding that the compute, storage, and consumption models on the public cloud are varied, complicated, and often misunderstood, experts say.


How WiFi 7 will transform business

In practice, WiFi 7 might not be rolled out for another couple of years — especially as many countries have yet to delicense the new 6GHz spectrum for public use. However, it is coming, so it’s important to plan for this development, as it could arrive more quickly than first thought. In the same way that bigger motorways are built and traffic increases to fill them, faster, more stable WiFi will encourage more usage and more users. To quote the popular business mantra: “If you build it, they will come.” WiFi 7 is a significant improvement over all the past WiFi standards. It uses the same spectrum chunks as WiFi 6/6E and can deliver data more than twice as fast. It has a much wider bandwidth for each channel, as well as a raft of other improvements. It is thought that WiFi 7 could deliver speeds of 30 gigabits per second (Gbps) to compatible devices and that the new standard could make running cables between devices completely obsolete. It’s now not necessarily about what you can do with the data, but how you actually physically interact with it.


How to Innovate Fast with API-First and API-Led Integration

Many have assembled their own technologies as they have tried to deliver a more productive, cloud-native platform-as-a-shared-service that different teams can use to create, compose and manage services and APIs. They try to combine integration, service development and API-management technologies on top of container-based technologies like Docker and Kubernetes. Then they add tooling on top to implement DevOps and CI/CD pipelines. After that come the first services and APIs that help expose legacy systems via integration, for example. When developers have access to such a platform within their preferred tools and can reuse core APIs instead of spending time on legacy integration, they can spend more time on designing and building the value-added APIs faster. At best, a group can use all the capabilities, because doing so spreads the adoption of best practices, helps get teams ramped up faster, and helps them deliver more quickly. But at the very least, APIs should be shared and governed together.


Using Apache Kafka to process 1 trillion inter-service messages

One important decision we made for the Messagebus cluster is to only allow one proto message per topic. This is configured in Messagebus Schema and enforced by the Messagebus-Client. This was a good decision to enable easy adoption, but it has led to numerous topics existing. When you consider that for each topic we create, we add numerous partitions and replicate them with a replication factor of at least three for resilience, there is a lot of potential to optimize compute for our lower-throughput topics. ... Making it easy for teams to observe Kafka is essential for our decoupled engineering model to be successful. We therefore have automated metrics and alert creation wherever we can to ensure that all the engineering teams have a wealth of information available to them to respond to any issues that arise in a timely manner. We use Salt to manage our infrastructure configuration and follow a GitOps-style model, where our repo holds the source of truth for the state of our infrastructure. To add a new Kafka topic, our engineers make a pull request into this repo and add a couple of lines of YAML.
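
In Cloudflare's workflow the topic is declared in a couple of lines of YAML and applied by Salt, but the end result is equivalent to a call against Kafka's admin API. A rough sketch with the kafka-python client (the broker address and topic name are invented) showing the one-message-type-per-topic and replication-factor-of-three conventions:

```python
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="kafka-broker:9092")  # hypothetical broker

# One protobuf message type per topic; replication factor of at least 3 for resilience.
admin.create_topics([
    NewTopic(
        name="messagebus.email-notifications",  # invented topic name
        num_partitions=6,
        replication_factor=3,
    ),
])
```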


Load Testing: An Unorthodox Guide

A common shortcut is to generate the load on the same machine that the server is running on (i.e. the developer’s laptop). What’s problematic about that? Generating load consumes CPU, memory, network bandwidth and I/O, and that will naturally skew your test results as to what request capacity your server can handle. Hence, you’ll want to introduce the concept of a loader: a loader is nothing more than a machine that runs, e.g., an HTTP client that fires off requests against your server. A loader sends n RPS (requests per second) and, of course, you’ll be able to adjust the number across test runs. You can start with a single loader for your load tests, but once that loader struggles to generate the load, you’ll want to have multiple loaders. (Like 3 in the graphic above, though there is nothing magical about 3; it could be 2, it could be 50.) It’s also important that the loader generates those requests at a constant rate, best done asynchronously, so that response processing doesn’t get in the way of sending out new requests. ... Bonus points if the loaders aren’t on the same physical machine, i.e. not just adjacent VMs all sharing the same underlying hardware.
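
To make the constant-rate, asynchronous point concrete, here is a minimal loader sketch in Python using asyncio and aiohttp (the target URL, rate, and duration are placeholders). Requests are scheduled on a fixed tick and never awaited inline, so a slow response cannot delay the next request:

```python
import asyncio
import aiohttp

TARGET = "http://server-under-test:8080/ping"  # placeholder endpoint
RPS = 200          # requests per second this loader generates
DURATION_S = 60    # length of the test run

async def fire(session: aiohttp.ClientSession, statuses: list) -> None:
    try:
        async with session.get(TARGET) as resp:
            statuses.append(resp.status)
    except aiohttp.ClientError:
        statuses.append("error")

async def run_loader() -> None:
    statuses: list = []
    async with aiohttp.ClientSession() as session:
        tasks = []
        for _ in range(RPS * DURATION_S):
            # Schedule the request and move straight on, so response processing
            # never gets in the way of keeping the send rate constant.
            tasks.append(asyncio.create_task(fire(session, statuses)))
            await asyncio.sleep(1 / RPS)
        await asyncio.gather(*tasks)
    print(f"sent {len(statuses)} requests from this loader")

asyncio.run(run_loader())
```

Running several such loaders on separate machines, each at a rate the loader hardware can comfortably sustain, keeps the measurement about the server rather than about the loader.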


Open-Source Testing: Why Bug Bounty Programs Should Be Embraced, Not Feared

There are two main challenges: one around decision-making, and another around integrations. Regarding decision-making, the process can really vary according to the project. For example, if you are talking about something like Rails, then there is an accountable group of people who agree on a timetable for releases and so on. However, within the decentralized ecosystem, these decisions may be taken by the community. For example, the DeFi protocol Compound found itself in a situation last year where in order to agree to have a particular bug fixed, token-holders had to vote to approve the proposal. ... When it comes to integrations, these often cause problems for testers, even if their product is not itself open-source. Developers include packages or modules that are written and maintained by volunteers outside the company, where there is no SLA in force and no process for claiming compensation if your application breaks because an open-source third party library has not been updated, or if your build script pulls in a later version of a package that is not compatible with the application under test.


3 automation trends happening right now

IT automation specifically continues to grow as a budget priority for CIOs, according to Red Hat’s 2022 Global Tech Outlook. While it’s outranked as a discrete spending category by the likes of security, cloud management, and cloud infrastructure, in reality, automation plays an increasing role in each of those areas. ... While organizations and individuals automate tasks and processes for a bunch of different reasons, the common thread is usually this: Automation either reduces painful (or simply boring) work or it enables capabilities that would otherwise be practically impossible – or both. “Automation has helped IT and engineering teams take their processes to the next level and achieve scale and diversity not possible even a few years ago,” says Anusha Iyer, co-founder and CTO of Corsha. ... Automation is central to the ability to scale – quickly, reliably, and securely – distributed systems, whether viewed from an infrastructure POV (think hybrid cloud and multi-cloud operations), application architecture POV, security POV, or through virtually any other lens. Automation is key to making it work.


CIO, CDO and CTO: The 3 Faces of Executive IT

Most companies lack experience with the CDO and CTO positions. This makes these positions (and those filling them) vulnerable to failure or misunderstanding. The CIO, who has supervised most of the responsibilities that the CDO and CTO are being assigned, can help allay fears, and benefit from the cooperation, too. This can be done by forging a collaborative working partnership with both the CDO and CTO, which will need IT’s help. By taking a pivotal and leading role in building these relationships, the CIO reinforces IT’s central role and helps the company realize the benefits of executive visibility of the three faces of IT: data, new technology research, and developing and operating IT business operations. Many companies opt to place the CTO and CDO in IT, where they report to the CIO. Sometimes this is done upfront. Other times, it is done when the CEO realizes that he/she doesn't have the time or expertise to manage three different IT functions. This isn't a bad idea, since the CIO already understands the challenges of leveraging data and researching new technologies.


Log4j: The Pain Just Keeps Going and Going

Why is Log4j such a persistent pain in the rump? First, it’s a very popular, open source Java-based logging framework. So it’s been embedded into thousands of other software packages. That’s no typo. Log4j is in thousands of programs. Adding insult to injury, Log4j is often deeply embedded in code and hidden from view due to being called in by indirect dependencies. So, the CSRB stated that “Defenders faced a particularly challenging situation; the vulnerability impacted virtually every networked organization, and the severity of the threat required fast action.” Making matters worse, according to CSRB, “There is no comprehensive ‘customer list’ for Log4j or even a list of where it is integrated as a subsystem.” ... “The pace, pressure, and publicity compounded the defensive challenges: security researchers quickly found additional vulnerabilities in Log4j, contributing to confusion and ‘patching fatigue’; defenders struggled to distinguish vulnerability scanning by bona fide researchers from threat actors; and responders found it difficult to find authoritative sources of information on how to address the issues,” the CSRB said.


Major Takeaways: Cyber Operations During Russia-Ukraine War

The operational security expert known as the grugq says Russia did disrupt command-and-control communications - but the disruption failed to stymie Ukraine's military. The government had reorganized from a "Soviet-style" centralized command structure to empower relatively low-level military officers to make major decisions, such as blowing up runways at strategically important airports before they were captured by Russian forces. Lack of contact with higher-ups didn't compromise the ability of Ukraine's military to physically defend the country. ... Another surprising development is the open involvement of Western technology companies in Ukraine's cyber defense, WithSecure's Hypponen says. "I'm surprised by the fact that Western technology companies like Microsoft and Google are there on the battlefield, supporting Ukraine against governmental attacks from Russia, which is again, something we've never seen in any other war." Western corporations aren't alone, either. Kyiv raised a first-ever volunteer "IT Army," consisting of civilians recruited to break computer crime laws in aid of the country's military defense.



Quote for the day:

"Leadership is a way of thinking, a way of acting and, most importantly, a way of communicating." -- Simon Sinek

Daily Tech Digest - July 19, 2022

Open source isn’t working for AI

It’s hard to trust AI if we don’t understand the science inside the machine. We need to find ways to open up that infrastructure. Loukides has an idea, though it may not satisfy the most zealous of free software/AI folks: “The answer is to provide free access to outside researchers and early adopters so they can ask their own questions and see the wide range of results.” No, not by giving them keycard access to Facebook’s, Google’s, or OpenAI’s data centers, but through public APIs. It’s an interesting idea that just might work. But it’s not “open source” in the way that many desire. That’s probably OK. ... Because open source is inherently selfish, companies and individuals will always open up code that benefits them or their own customers. It’s always been this way, and always will be. To Loukides’ point about ways to meaningfully open up AI despite the delta between the three AI giants and everyone else, he’s not arguing for open source as we traditionally defined it under the Open Source Definition. Why? Because as fantastic as it is (and it truly is), it has never managed to answer the cloud open source quandary—for both creators and consumers of software—that DiBona and Zawodny laid out at OSCON in 2006.


Botnet malware disguises itself as password cracker for industrial controllers

What's weird is that the malware also deploys code to check the clipboard contents for cryptocurrency wallet addresses, and silently rewrites those details to point to another wallet so as to steal people's funds. Remember, this is running on PCs normally connected to industrial equipment, so perhaps the crooks behind this caper just grabbed some generic nasty to use. "Dragos assesses with moderate confidence the adversary, while having the capability to disrupt industrial processes, has financial motivation and may not directly impact Operational Technology (OT) processes," the team wrote. The Sality malware family has been around for almost two decades, first detected in 2003, and can be commanded by its masterminds to perform other malicious actions, such as attacking routers, F-Secure analysts wrote in a report. Sality maintains persistence on the host PC through process injection and file infection, and abuses Windows' autorun functionality to spread copies of itself over USB drives, network shares, and external storage, according to Dragos.


Rescale and Nvidia partner to automate industrial metaverse

The new partnership between Rescale and Nvidia will allow enterprises to connect workflows between Rescale’s existing catalog of engineering and scientific containers, Nvidia’s extensive NGC offerings, and enterprises’ standard containers of their own models and supporting software. This new containerized approach to engineering software means teams can specify the software libraries and configurations that reflect industry best practices. The recent Nvidia and Siemens partnership is an ambitious effort to bring together physics-based digital models and real-time AI. Rescale’s announcement with Nvidia enhances this partnership, as accelerated computing combined with high-performance computing is the foundation that powers these use cases. For example, enterprises can take advantage of Nvidia’s work on Modulus, which uses AI to speed up physics simulations hundreds or thousands of times. Siemens estimates that integrating physics and AI models could help save the power industry $1.7 billion in reduced turbine maintenance. The partnership could also make it easier for companies to integrate other apps that work on these tools.


Uber Files leak shows why India’s approach to security and privacy matters

In the Uber Files investigation led by The Guardian under the International Consortium of Investigative Journalists (ICIJ), the leaked documents provide evidence of law-breaking, lobbying of world leaders, the use of stealth technologies to evade raids, and opaque algorithms deployed by the Uber Corporation in 2012-16. In the documents, there are instances where Uber executives sanctioned the use of stealth technologies like the ‘Kill Switch’ to evade regulations and frustrate the efforts of investigative agencies to conduct a fair probe in India, Belgium and other countries. On similar lines, reports indicate that e-commerce giant Amazon spent more than Rs 8,000 crore in India on legal fees in 2018-20. There are numerous incidents like these where regulators are in a Catch-22 situation between regulation and innovation. The Uber Files show how technology platforms deploy a multi-pronged strategy to subvert public opinion with sponsored academic work, ally with public officials, and wilfully stifle investigations by law enforcement agencies to dodge regulatory efforts for better transparency, accountability and public scrutiny of their architecture.


Is Microsoft’s VS Code really open source?

“Microsoft modifies VS Code in a way that a non-Microsoft VS Code fork can’t use extensions from the official Microsoft VS Code store. Not only that, some of the VS Code extensions developed and released by Microsoft will only work in the VS Code released by Microsoft and won’t work on non-Microsoft VS Code forks,” mentioned Ranatunge in his blog post. Microsoft has made similar moves in the past. It modified the open-source cross-platform IDE MonoDevelop into Visual Studio for Mac. Visual Studio for Mac has three versions: for students, professionals, and enterprises. While the students’ version is free and supports classroom learning, individual developers and small companies must log in via the IDE to access the other versions. In 2021, Microsoft abruptly removed the Hot Reload functionality from the open-source .NET SDK, only to restore it later after the removal had enraged the .NET community. As stated, Microsoft follows an open-core model for VS Code. Therefore, developers who want access to the full open source code that is MIT-licensed will have to download the code from the repository and then build VS Code on their own.


Open source security needs automation as usage climbs amongst organisations

"OSS is not insecure per se…the challenge is with all the versions and components that may make up a software project," he explained. "It is impossible to keep up without automation and prioritisation." He noted that the OSS community was responsive in addressing security issues and deploying fixes, but organisations tapping OSS would have to navigate the complexity of ensuring their software had the correct, up-to-date codebase. This was further compounded by the fact that most organisations would have to manage many projects concurrently, he said, stressing the importance of establishing a holistic software security strategy. He further pointed to the US National Institute of Standards and Technology (NIST), which offered a software supply chain framework that could aid organisations in planning their OSS security response. Asked if regulations were needed to drive better security practices, Liu said most companies saw cybersecurity as a cost and would not want to address it actively in the absence of any incentive.


How To Minimize the Impacts of Shadow IT on Your Business

Organizations looking to manage and mitigate the negative impacts of shadow IT must first perform an internal audit. Cloud security applications such as Microsoft’s Cloud App Security detect unsanctioned usage of applications and data. But detecting shadow IT is only one part of the equation. Companies should work to address the root causes. This may include optimizing communications between departments – particularly the IT team and other departments. If one department discovers a software solution that may be beneficial, they should feel comfortable approaching the IT team. CIOs and IT staff should develop processes that allow them to streamline software assessment and procurement. They should be able to give in-depth reasons why a particular tool suggested by a non-IT employee may be impracticable. Additionally, it is recommended that IT staff suggest a better alternative if they reject a proposed tool. Organizations should consider training non-IT staff in cybersecurity literacy and awareness. 


Gatling vs JMeter - What to Use for Performance Testing

There's a saying that every performance tester should know: "lies, damned lies, and statistics." If they don't know it yet, they will surely learn it in a painful way. A separate article could be written about why this sentence should be the mantra of the performance test area. In a nutshell: median, arithmetic mean, and standard deviation are completely useless metrics in this field (you can use them only as an additional insight). You can get more detail on that in this great presentation by Gil Tene, CTO and co-founder at Azul. Thus, if a performance testing tool only provides these statistics, it can be discarded right away. The only meaningful metrics for measuring and comparing performance are the percentiles. However, you should also treat them with some suspicion about how they were implemented. Very often the implementation is based on the arithmetic mean and standard deviation, which, of course, makes them equally useless. ... Another approach would be to check the source code of the implementation yourself. I regret that most performance test tools' documentation does not cover how percentiles are calculated.
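
A tiny worked example of why this matters (numbers invented): with a long-tailed set of response times, the mean and median look healthy while the high percentiles expose the latency some users actually hit.

```python
import numpy as np

# 98 fast responses around 15 ms plus a couple of slow outliers (numbers invented).
latencies = np.array([15] * 98 + [900, 1200])

print("mean   :", latencies.mean(), "ms")              # ~36 ms: looks fine
print("median :", np.percentile(latencies, 50), "ms")  # 15 ms: hides the tail entirely
print("p99    :", np.percentile(latencies, 99), "ms")  # ~900 ms: what unlucky users feel
```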


BlackCat Adds Brute Ratel Pentest Tool to Attack Arsenal

Sophos investigators found that the attacker used commercially available tools such as AnyDesk and TeamViewer and also installed nGrok, an open-source remote access tool. "The attackers also used PowerShell commands to download and execute Cobalt Strike beacons on some machines, and a tool called Brute Ratel, which is a more recent pen-testing suite with Cobalt Strike-like remote access features," Brandt says. Sophos researchers found that the Brute Ratel binary was installed as a Windows service named wewe in an affected machine. One of the bigger challenges for the Sophos investigators was that some of the targeted organizations were running the same servers that were compromised using the Log4j vulnerability. Apart from ransoming systems on the network, the threat actors collected and exfiltrated sensitive data from the targets and uploaded large volumes of data to Mega, a cloud storage provider. The attackers used a third-party tool called DirLister to create a list of accessible directories and files, or in some cases used a PowerShell script from a pen tester toolkit, called PowerView.ps1, to enumerate the machines on the network. 


Removing the blind spots that allow lateral movement

One of the biggest challenges of lateral movement detection is its low anomaly factor. Lateral movement attacks exploit the gaps in an organization’s user authentication process. Such attacks tend to remain undetected because the authentication performed by the attacker is essentially identical to the authentication made by a legitimate user. Following the initial “patient zero” compromise, the attacker uses valid credentials to log in to organizational systems or applications. Therefore, the standard IAM infrastructure in place cannot detect any anomaly during this process, which allows attackers to slip through and remain in the network undetected. Another key challenge is the potential mismatch or disparity between the endpoint and identity protection aspects. Endpoint protection solutions are mainly focused on detecting anomalies in file and process execution. However, the attacker gains access by exploiting the legitimate authentication infrastructure, utilizing legitimate files and processes. Therefore, the activity doesn’t appear on the radar of endpoint solutions.



Quote for the day:

"Sport fosters many things that are good; teamwork and leadership" -- Daley Thompson

Daily Tech Digest - July 18, 2022

Cyber Safety Review Board warns that Log4j event is an “endemic vulnerability”

According to the report, "The pace, pressure, and publicity compounded the defensive challenges." As a result, researchers found additional vulnerabilities in Log4j, contributing to confusion and "patching fatigue," and "responders found it difficult to find authoritative sources of information on how to address the issues. This frenetic period culminated in one of the most intensive cybersecurity community responses in history." ... The few organizations that responded effectively to the event "understood their use of Log4j and had technical resources and mature processes to manage assets, assess risk, and mobilize their organization and key partners to action. Most modern security frameworks call out these capabilities as best practices." ... A fog still hovers over the event because, "No authoritative source exists to understand exploitation trends across geographies, industries or ecosystems. Many organizations do not even collect information on specific Log4j exploitation, and reporting is still largely voluntary. Most importantly, however, the Log4j event is not over."


DTN’s CTO on combining IT systems after a merger

Enterprises often make strategic errors when combining IT systems following an acquisition, Ewe says. “The number one mistake I see is, ‘Since we acquired you, clearly we win,’” he says. “Just because A bought B, you don’t want to assume that A has better technology than B.” Another common mistake is to go solely by the numbers, picking one company’s IT system over the other’s because it has the highest revenue or profitability, he says: “The issue there is that you’re oversimplifying the process.” Given the investment in time and money necessary to merge two companies’ IT systems, “it’s worthwhile spending an extra few weeks up-front to make a more thorough analysis of which solution or which pieces of which solutions should come together,” Ewe says. Jumping straight in and making a wrong decision can cost more in the long term. Ewe consulted with product and sales management, and with customers, to identify the needs DTN’s single engine would have to satisfy, as well as the use cases it would serve, before evaluating the existing assets against those needs. 


Ransomware and backup: Overcoming the challenges

Recovering data after a ransomware attack is more complex and more risky than recovery from a system outage or natural disaster. The greatest risk is that backups contain undetected ransomware, which then replicates into the production system or recovered systems. This risk is reduced by using air-gapped copies and immutable copies and snapshots, and by keeping more copies than would be required for conventional backup alone. This requires a more cautious approach to data recovery, and one that can be at odds with the commercial pressures for short RTOs and recent RPOs. Matters are made more difficult because there are no viable, fool-proof systems that can scan data for ransomware before it is backed up, says Barnaby Mote, managing director at backup specialist Databarracks. “Before ransomware was a thing, replicating data from production systems to DR as quickly as possible was a sound recovery strategy for conventional disasters,” he says. “Now, with ransomware, it has the opposite of the desired effect, rendering recovery systems unusable.”


Continuous Intelligence: Definition, Benefits, and Examples

While humans cannot inspect every possible characteristic and combination in the flood of incoming data, machines can. Complementing analytics that provide precise answers to questions users know to ask, a machine can continuously monitor data in the background to detect unknown correlations and trends that deviate from what would have been expected by the system based on previous observations. This way, companies can identify hidden, but potentially relevant, signals in the data. Gartner predicts that by 2022, more than half of major new business systems will incorporate continuous intelligence capabilities. By integrating artificial intelligence (AI)-based continuous intelligence into their day-to-day operations, companies can: boost efficiency by spending less time sifting through data from a variety of disparate sources; focus on what really matters for their business; and speed time to action. By automatically inspecting critical business health indicators such as revenue, Web page views, active users, or transaction volume in real time, businesses can accelerate their time to insight and action and better respond to situations before business is impacted.
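
A toy version of that background monitoring, with every threshold, window, and data point invented for illustration, is a rolling-baseline check on a business metric; real continuous intelligence systems use far richer models, but the idea of flagging deviations from what the recent past would predict is the same:

```python
from statistics import mean, stdev

def flag_anomalies(values, window=24, threshold=3.0):
    """Flag points that deviate strongly from the trailing window's baseline."""
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(values[i] - mu) > threshold * sigma:
            anomalies.append((i, values[i]))
    return anomalies

# Hourly transaction counts that suddenly drop in the final hour.
hourly_transactions = [500, 520, 515, 498, 530, 510, 505, 525] * 4 + [90]
print(flag_anomalies(hourly_transactions))  # -> [(32, 90)]
```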


7 reasons Java is still great

As a longtime Java programmer, it was surprising—astonishing, actually—to watch the language successfully incorporate lambdas and closures. Adding functional constructs to an object-oriented programming language was a highly controversial and impressive feat. So was absorbing concepts introduced by technologies like Hibernate and Spring (JSR 317 and JSR 330, respectively) into the official platform. That such a widely used technology can still integrate new ideas is heartening. Java's responsiveness helps to ensure the language incorporates useful improvements. It also means that developers know they are working within a living system, one that is being nurtured and cultivated for success in a changing world. Project Loom—an ambitious effort to re-architect Java’s concurrency model—is one example of a project that underscores Java's commitment to evolving. Several other proposals currently working through the JCP demonstrate a similar willingness to go after significant goals to improve Java technology. The people working on Java are only half of the story. The people who work with it are the other half, and they are reflective of the diversity of Java's many uses.


Search Here: Ransomware Groups Refine High-Pressure Tactics

Ransomware groups continue to refine the tactics they use to better pressure victims into paying. And they're succeeding. "In recent months, we have seen an increase in the number of ransomware attacks and ransom amounts being paid," the heads of Britain's lead cybersecurity agency and privacy watchdog warned last week in an open letter to the legal industry. The impetus for the alert from Britain's National Cyber Security Centre - the public-facing arm of intelligence agency GCHQ - and the Information Commissioner's Office: they're urging solicitors to never advise clients to pay a ransom. Doing so will not lessen any penalties the ICO might levy, helps perpetuate the ransomware business model and could violate U.S. sanctions, they say. But the increase in ransoms being paid speaks to the success of ransomware groups' continuing innovation. Psychological pressure remains a specialty. After infecting systems, many types of ransomware reboot infected PCs to a lock screen that lists the ransom demand, a cryptocurrency wallet address for routing funds and a countdown timer.


Functional programming is finally going mainstream

For some, using an object-oriented language like Java, JavaScript, or C# for functional programming can feel like swimming upstream. “A language can steer you towards certain solutions or styles of solutions,” says Gabriella Gonzalez, an engineering manager at Arista Networks. “In Haskell, the path of least resistance is functional programming. You can do functional programming in Java, but it’s not the path of least resistance.” A bigger issue for those mixing paradigms is that you can’t expect the same guarantees you might receive from pure functions if your code includes other programming styles. “If you’re writing code that can have side effects, it’s not functional anymore,” Williams says. “You might be able to rely on parts of that code base. I’ve made various functions that are very modular, so that nothing touches them.” Working with strictly functional programming languages makes it harder to accidentally introduce side effects into your code. “The key thing about writing functional programming in something like C# is that you have to be careful because you can take shortcuts and then you’ve got the exact sort of mess you would have if you weren’t using functional programming at all,” Louth says.
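
To make the side-effect distinction concrete, here is a toy sketch in Python rather than the languages quoted above; the pure version depends only on its inputs and changes nothing outside itself, which is exactly what makes it safe to rely on in isolation:

```python
# Impure: reads and mutates state outside the function, so its result
# depends on when and how often it has been called before.
cart = []

def add_item_impure(item):
    cart.append(item)
    return len(cart)

# Pure: the same inputs always give the same output, and nothing else changes.
def add_item_pure(cart, item):
    return cart + [item]  # returns a new list instead of mutating the argument

original = ["book"]
updated = add_item_pure(original, "pen")
assert original == ["book"]           # the input is untouched
assert updated == ["book", "pen"]
```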


Safeguarding the open source model amidst big tech involvement

Two of the main techniques to safeguard open source and its community are smart licensing tactics and constant innovation. The first technique is to simply switch the project licence from an open source licence to a more restrictive licence. There are two specific licences that can be used to protect against clouds and corporations: AGPL-3 and SSPL — specifically developed by the likes of MongoDB, Elastic and Grafana to protect themselves from AWS. For instance, while many projects shifted away from GPL-style licences towards more permissive forms of licensing, under the GPL, contributors are required to make their code available to the open source community; the so-called “copyleft”. This traditional licensing style helps to create a more open, transparent ecosystem. Another way in which open source can safeguard its future is through smart innovation. Constantly innovating in order to satisfy users should be the way forward for the evolution of open source projects and solutions. This would enable companies to maintain their competitive edge and keep up with technological trends.


5 ways fear can derail your digital transformation strategy

When we confront new work technologies such as a hybrid workplace, virtual meeting rooms, or new software, we tend to resist or avoid them simply because they’re new and we’re not used to them. This creates division. A company looking to offer a hybrid workplace might encounter resistance from employees, managers, and even customers who refuse to recognize this arrangement. What appears to be a simple reluctance to change is actually a deep-seated fear of changing a comfortable status quo. What you can do about this: Offer facts to neutralize fear. People often use their own frame of reference if they are not given something tangible to hold on to. If the change involves new technology, demonstrate the technology. Let them see how it works. If the change is organizational, such as a hybrid workspace, present the facts about how it will work, what will change, and what will stay the same. Listen to and respond to their questions and objections. Humans are dominated by emotion, and logic is always playing catch-up. 


The Four P's of Pragmatically Scaling Your Engineering Organization

Your people aren’t just the heart and soul of the company; they’re the building blocks for its future. When you're growing rapidly it can be tempting to add developers to your team as quickly as possible, but it's important to first consider your company goals while remaining practical about how you’re scaling. This is the key foundation for building the right organization. ... Scaling your processes comes down to practical prioritization. It is crucial to clearly establish processes that balance both short- and long-term wins for the company, beginning with the systems that need to be fixed immediately. Start by instituting a planning process that looks at things from both an annual and a quarterly (or even monthly) perspective, and try not to get bogged down deliberating over a planning methodology in the first stage. ... Scaling the platform is often the biggest challenge organizations face in the hyper-growth phase. But it’s important to remember that building toward a north star doesn’t mean that you’re building the north star. Now is the time to focus on intentional, iterative improvement of the platform rather than implementing sweeping changes to your product.



Quote for the day:

"It is one thing to rouse the passion of a people, and quite another to lead them." -- Ron Suskind

Daily Tech Digest - July 17, 2022

The Shared Responsibility of Taming Cloud Costs

The cost of cloud impacts the bottom line and therefore cloud cost management cannot be the job of the CIO alone. It’s important to create a culture or framework where managing cloud costs is a shared responsibility among business, product, and engineering teams, and where it’s a consideration throughout the software development process and in IT operations. In order to do just this, it’s important to shift education left. Like many DevOps principles, “shift-left” once had a specific meaning that has become more generalized over time. At its core, the idea of shifting left is to be proactive when it comes to cost management in all management and operational processes. It means empowering developers and making operational considerations a key part of application development. Change management must also be connected to the context of cost. If organizations educate and empower developers to understand the impact of cloud cost as software is written, they will reap the benefits of building more cost-effective software that improves operational visibility and control.


How AI Regulations Are Shaping Its Current And Future Use

Examining some of the many laws that have been passed in relation to AI, I have identified some of the best practices for both statewide and nationwide regulation. On a national level, it is crucial both to develop public trust in AI and to have advisory boards that monitor its use. One such example is having specific research teams or committees dedicated to identifying and studying deepfakes. In the U.S., Texas and California have legally banned the use of deepfakes to influence elections, and the EU created a self-regulating Code of Practice on Disinformation for all online platforms to achieve similar results. Another necessity is to have an ethics committee that monitors and advises on the use of AI in digitization activities, a practice currently in place in Belgium (pg. 179). Specifically, this committee encourages companies that use AI to weigh the costs and benefits of implementation compared to the systems that will get replaced. Finally, it’s important to promote public trust in AI on a national level.


5 key considerations for your 2023 cybersecurity budget planning

The cost of complying with various privacy regulations and security obligations in contracts is going up, Patel says. “Some contracts might require independent testing by third-party auditors. Auditors and consultants are also raising fees due to inflation and rising salaries,” he says. ... “When an organization is truly secure, the cost to achieve and maintain compliance should be reduced,” he says. Evolving regulatory compliance requirements, especially for those organizations supporting critical infrastructure, require significant support, Chaddock says. “Even the effort to determine what needs to happen can be costly and detract from daily operations, so plan for increased effort to support regulatory obligations if applicable,” he says. ... If paying for such policies comes out of the security budget, CISOs will need to take into consideration the rising costs of coverage and other factors. Companies should be sure to include the cost of cyber insurance over time and, more important, the costs associated with maintaining effective and secure backup/restore capabilities, Chaddock says.


CISA pulls the fire alarm on Juniper Networks bugs

The networking and security company also issued an alert about critical vulnerabilities in Junos Space Security Director Policy Enforcer — this piece provides centralized threat management and monitoring for software-defined networks — but noted that it's not aware of any malicious exploitation of these critical bugs. While the vendor didn't provide details about the Policy Enforcer bugs, they received a 9.8 CVSS score, and there are "multiple" vulnerabilities in this product, according to the security bulletin. The flaws affect all versions of Junos Space Policy Enforcer prior to 22.1R1, and Juniper said it has fixed the issues. The next group of critical vulnerabilities exists in third-party software used in the Contrail Networking product. In this security bulletin, Juniper issued updates to address more than 100 CVEs that go back to 2013. Upgrading to release 21.4.0 updates the Open Container Initiative-compliant Red Hat Universal Base Image container image from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8, the vendor explained in the alert.


HTTP/3 Is Now a Standard: Why Use It and How to Get Started

As you move from one mast to another, from behind walls that block or bounce signals, connections are commonly cut and restarted. This is not what TCP likes — it doesn’t really want to communicate without formal introductions and a good firm handshake. In fact, it turns out that TCP’s strict accounting and waiting for that last stray packet just means that users have to wait around for webpages to load and new apps to download, or a connection timeout to be re-established. So to take advantage of the informality of UDP, and to allow the network to use some smart stuff on the fly, the new QUIC (Quick UDP Internet Connections) format got more attention. While we don’t want to see too much intelligence within the network itself, we are much more comfortable these days with automatic decision making. QUIC understands that a site is made up of multiple files, and it won’t blight the entire connection just because one file hasn’t finished loading. The other trend that QUIC follows up on is built-in security. Whereas encryption was previously optional (i.e., HTTP vs. HTTPS), QUIC is always encrypted.
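
Servers advertise HTTP/3 support to clients through the Alt-Svc response header. As a small illustrative sketch (standard library only; the hostname is just an example), the snippet below fetches a page over ordinary HTTPS and reports whether the server advertises an h3 endpoint; it does not itself speak QUIC:

```python
import urllib.request

def advertises_http3(url: str) -> bool:
    """Fetch `url` over regular HTTPS (TCP) and report whether the server's
    Alt-Svc header advertises an HTTP/3 ("h3") endpoint. This only inspects
    the advertisement; the request itself does not use QUIC."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        alt_svc = resp.headers.get("Alt-Svc", "")
    return "h3" in alt_svc

if __name__ == "__main__":
    # Example host only; substitute any site you want to check.
    print(advertises_http3("https://www.cloudflare.com/"))
```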


The enemy of vulnerability management? Unrealistic expectations

First and most importantly, you need to be realistic. Many organizations want critical vulnerabilities fixed within seven days. That is not realistic if you only have one maintenance window per month. Additionally, if you do not have the ability to reboot all your systems every weekend, you are setting yourself up for failure. If you only have one maintenance window per month, there is no reason to set a due date on critical vulnerabilities any less than 30 days. For obvious reasons, organizations are nervous about speaking publicly about how quickly they remediate vulnerabilities. One estimate states that the mean time to remediate for private sector organizations is between 60 and 150 days. You can get into that range by setting due dates of 30, 60, 90, and 180 days for severities of critical, high, medium, and low, respectively. Better yet, this is achievable with a single maintenance window each month. As someone who has worked on both sides of this problem, getting it fixed eventually is more important than taking a hard line on getting it fixed lightning fast, and then having it sit there partially fixed indefinitely. Setting an aggressive policy that your team cannot deliver on looks tough.
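
The due-date policy the author describes is easy to mechanize. Here is a minimal Python sketch using the article's 30/60/90/180-day figures (the function name and severity labels are just illustrative):

```python
from datetime import date, timedelta

# Remediation windows from the article: critical, high, medium, low.
SLA_DAYS = {"critical": 30, "high": 60, "medium": 90, "low": 180}

def remediation_due(found: date, severity: str) -> date:
    """Return the remediation due date for a finding of the given severity."""
    return found + timedelta(days=SLA_DAYS[severity.lower()])

# Example: a critical finding logged on July 17, 2022 comes due after the
# next monthly maintenance window rather than on an unrealistic 7-day clock.
print(remediation_due(date(2022, 7, 17), "critical"))  # 2022-08-16
```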


‘Callback’ Phishing Campaign Impersonates Security Firms

Researchers likened the campaign to one discovered last year, dubbed BazarCall, from the Wizard Spider threat group. That campaign used a similar tactic to try to spur people to make a phone call to opt out of renewing an online service the recipient was purportedly using, Sophos researchers explained at the time. If people made the call, a friendly person on the other side would give them a website address where the soon-to-be victim could supposedly unsubscribe from the service. However, that website instead led them to a malicious download. ... Researchers did not specify which other security companies were being impersonated in the campaign, which they identified on July 8. In their blog post, they included a screenshot of the email impersonating CrowdStrike that was sent to recipients; it appears legitimate because it uses the company’s logo. Specifically, the email informs the target that it’s coming from their company’s “outsourced data security services vendor,” and that “abnormal activity” has been detected on the “segment of the network which your workstation is a part of.”


The next frontier in cloud computing

Terms are beginning to emerge, such as “supercloud,” “distributed cloud,” “metacloud” (my vote), and “abstract cloud.” Even the term “cloud native” is up for debate. To be fair to the buzzword makers, they all define the concept a bit differently, and I know the wrath that comes with defining a buzzword a bit differently than others do. The common pattern seems to be a collection of public clouds and sometimes edge-based systems that work together for some greater purpose. The metacloud concept will be the single focus for the next 5 to 10 years as we begin to put public clouds to work. Having a collection of cloud services managed with abstraction and automation is much more valuable than attempting to leverage each public cloud provider on its terms rather than yours. We want to leverage public cloud providers through abstract interfaces to access specific services, such as storage, compute, artificial intelligence, data, etc., and we want to support a layer of cloud-spanning technology that allows us to use those services more effectively. A metacloud removes the complexity that multicloud brings these days.
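
One way to picture the abstraction layer described here is a thin, provider-neutral interface that application code targets, with per-cloud adapters behind it. The sketch below is purely conceptual, with hypothetical class names rather than any existing library:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Provider-neutral storage interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter; real adapters would wrap AWS S3, Azure Blob, GCS, etc."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    # Application logic depends only on the abstract interface, so swapping
    # or spanning providers never touches this code.
    store.put(f"reports/{name}", body)

archive_report(InMemoryStore(), "q3.txt", b"quarterly numbers")
```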


A CIO’s guide to guiding business change

When it comes to supporting business change, the “it depends” answer amounts to choosing the most suitable methodology, not the methodology the business analyst has the darkest belt in. But on the other hand, the idea of having to earn belts of varying hue, or their equivalent levels of expertise, in several of these methodologies, just so you can choose the one that best fits a situation, might strike you as too intimidating to bother with. Picking one to use in all situations, and living with its limitations, is understandably tempting. If adding to your belt collection isn’t high on your priority list, here’s what you need to know to limit your hold-your-pants-up apparel to suspenders, leaving the black belts to specialists you bring in for the job once you’ve decided which methodology fits your situation best. Before you can be in a position to choose, keep in mind the six dimensions of process optimization: fixed cost, incremental cost, cycle time, throughput, quality, and excellence. You need to keep these center stage because you can optimize around no more than three of them; the ones you choose have tradeoffs; and each methodology is designed to optimize different process dimensions.


7 Reasons to Choose Apache Pulsar over Apache Kafka

Apache Pulsar is like two products in one. Not only can it handle high-rate, real-time use cases like Kafka, but it also supports standard message queuing patterns, such as competing consumers, fail-over subscriptions, and easy message fan out. Apache Pulsar automatically keeps track of the client's read position in the topic and stores that information in its high-performance distributed ledger, Apache BookKeeper. Unlike Kafka, Apache Pulsar can handle many of the use cases of a traditional queuing system, like RabbitMQ. So instead of running two systems — one for real-time streaming and one for queuing — you do both with Pulsar. It’s a two-for-one deal, and those are always good. ... Well, with Apache Pulsar it can be that simple. If you just need a topic, then use a topic. You don’t have to specify the number of partitions or think about how many consumers the topic might have. Pulsar subscriptions allow you to add as many consumers as you want on a topic with Pulsar keeping track of it all.
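
As a hedged sketch of the queuing pattern described above, the snippet below uses the pulsar-client Python library to attach a competing consumer to a plain, unpartitioned topic through a shared subscription. The topic and subscription names are placeholders, and a broker is assumed at the default local address:

```python
import pulsar

# Assumes a Pulsar broker reachable at the standard local service URL.
client = pulsar.Client("pulsar://localhost:6650")

# A shared subscription gives classic competing-consumer (work queue) semantics:
# each message is delivered to exactly one attached consumer, and Pulsar tracks
# the read position in BookKeeper on our behalf.
consumer = client.subscribe(
    "orders",                      # plain topic; no partition count to pick
    subscription_name="order-workers",
    consumer_type=pulsar.ConsumerType.Shared,
)

producer = client.create_producer("orders")
producer.send(b"order-1001")

msg = consumer.receive()
print(msg.data())                  # b'order-1001'
consumer.acknowledge(msg)          # acknowledged messages are not redelivered

client.close()
```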



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.