Daily Tech Digest - July 25, 2022

Digital presenteeism is creating a future of work that nobody wants

While technology has enabled more employees to work remotely – bringing considerable benefits in doing so – it has also facilitated digital presenteeism, Qatalog and GitLab concluded. One solution is to make technology less invasive and "more considerate of the user and completely redesigned for the new way of work, rather than supporting old habits in new environments" – although this may be easier said than done. According to Rauf, current solutions require a "radical redesign that is more considerate of the user and prioritizes their objectives, rather than simply capturing our attention." A culture shift is also necessary for async work to become normalized, says Rauf. This comes from the top, and starts with trust: "When leaders send a message to their team, make clear whether or not it needs an immediate response, or better yet, schedule updates to go out when people are most likely online. If I message a team member at an odd hour, I prefix a 'for tomorrow' or 'no rush', so they know it's not an urgent issue."


Confronting the risks of artificial intelligence

Because AI is a relatively new force in business, few leaders have had the opportunity to hone their intuition about the full scope of societal, organizational, and individual risks, or to develop a working knowledge of their associated drivers, which range from the data fed into AI systems to the operation of algorithmic models and the interactions between humans and machines. As a result, executives often overlook potential perils (“We’re not using AI in anything that could ‘blow up,’ like self-driving cars”) or overestimate an organization’s risk-mitigation capabilities (“We’ve been doing analytics for a long time, so we already have the right controls in place, and our practices are in line with those of our industry peers”). It’s also common for leaders to lump in AI risks with others owned by specialists in the IT and analytics organizations. Leaders hoping to avoid, or at least mitigate, unintended consequences need both to build their pattern-recognition skills with respect to AI risks and to engage the entire organization so that it is ready to embrace the power and the responsibility associated with AI.


The AIoT Revolution: How AI and IoT Are Transforming Our World

AIoT is a growing field with many potential benefits. Businesses that adopt AIoT can improve their efficiency, decision-making, customization, and safety. ... Increased efficiency: By combining AI with IoT, businesses can automate tasks and processes that would otherwise be performed manually. This can free up employees to focus on more important tasks and increase overall productivity. Improved decision-making: By collecting data from various sources and using AI to analyze it, businesses can gain insights they wouldn’t otherwise have. It can help businesses make more informed decisions, from product development to marketing. Greater customization: Businesses can create customized products and services tailored to their customers’ needs and preferences using data collected from IoT devices. This can lead to increased customer satisfaction and loyalty. Reduced costs: Businesses can reduce their labor costs by automating tasks and processes. Additionally, AIoT can help businesses reduce their energy costs by optimizing their use of resources. Increased safety: By monitoring conditions and using AI to identify potential hazards, businesses can take steps to prevent accidents and injuries.
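
To make the safety point concrete, here is a minimal sketch (not from the article; the sensor, threshold, and readings are invented) of how data streamed from IoT devices can be screened with a simple statistical check so that potential hazards are flagged for review:

from statistics import mean, stdev

def find_anomalies(readings, z_threshold=3.0):
    # Return (index, value) pairs whose z-score exceeds the threshold.
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [(i, r) for i, r in enumerate(readings) if abs(r - mu) / sigma > z_threshold]

# Hourly motor-temperature readings from a hypothetical IoT gateway (deg C).
temperatures = [61.2, 60.8, 61.5, 62.0, 61.1, 60.9, 95.4, 61.3]
for index, value in find_anomalies(temperatures):
    print(f"reading {index}: {value} deg C looks abnormal - inspect the motor")

In a production AIoT deployment the threshold check would typically be replaced by a trained anomaly detector, but the flow of sensor data in, flagged events out, is the same.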


It's time for manufacturers to build a collaborative cybersecurity team

Despite the best-laid plans, bear in mind that these are active, interconnected and dynamic systems. It's impossible to separate physical and cybersecurity elements, as their role in business operations is so foundational. As the landscape of new technologies and best practices changes, adapt along with it. Ensure the lines of communication are open, management maintains involvement in the process, and all the key parties across IT and OT are committed to working collaboratively to strengthen every element of security. These tenets will help manufacturing organizations stay nimble in the face of an ever-changing security landscape. As the convergence of IT and OT continues, the risk of cyberthreats will continue to rise along with it. Building a collaborative security team across both IT and OT will help to reduce organizational risk and fortify critical infrastructure. By involving leadership, setting a plan, and staying adaptable as things change, security leaders will be armed with a comprehensive security approach that supports near-term needs and offers long-term business sustainability.


Why diverse recruitment is the key to closing the cyber-security skills gap

When it comes to mitigating the ever-evolving cyber threat, diversity is a crucial, but often overlooked, factor. As cyber attacks are becoming increasingly culturally nuanced, it is important that we meet the challenge by drawing from a wide range of backgrounds and life experiences. Cyber attackers come from everywhere - from a wide range of ages, locations, and educational backgrounds - so our responders should too. Cyber security is often perceived as revolving around, and being driven mainly by, highly complex technology. While tech clearly plays a crucial role in mitigating cyber attacks, successfully countering them would not be possible without the role performed by people. This is enriched hugely by having a workforce which covers as many educational and socio-economic backgrounds as possible. In making a concerted effort towards a more diverse workforce, the cyber-security industry will be able to gain a deeper awareness of the cultural nuances that underlie cyber attacks. It's important to fully understand what we mean by diverse hiring. Considering entry routes into the industry is a big part of attracting a broader range of demographics.


You have mountains of data, but do you know how to climb?

We have more data than ever before, but it is not enough to merely accumulate it. Dedicate time and resources to establishing digital governance to ensure the data you are using is clean, consistently implemented, and universally understood. ... The tech team is not solely responsible for the quality of our data—we all need to take ownership of and champion the data we use. Visualization tools bridge the gap between the tech team and the business team, doing away with barriers to entry and enabling end-to-end analytics. In this way, you can empower employees to immerse themselves in and take ownership of the data at hand. Users no longer have to submit a request to the tech team to create a report and twiddle their thumbs until it comes back. They can now take initiative and do it themselves, creating a more streamlined process and a more informed group of employees who can work quickly to make data-driven decisions. Furthermore, when you empower people to take control of their data and ask their own questions, they may uncover new insights they would never have found when presented with pre-packaged reports.


Software Supply Chain Concerns Reach C-Suite

From Cornell's perspective, DevOps — or hopefully, DevSecOps groups — should really spearhead the management of software supply chain risk. "They are the ones who own the software development process, and they see the code that is written," he says. "They see the components that are pulled in. They watch the software get built. And they make it available to whoever is next on down the line." Given this vantage point, they can positively influence an organization's software supply chain security posture by implementing good policies and practices around what open source code is included in their software and when those open source components are upgraded. "Forward-leaning DevSecOps teams can take advantage of their automation and testing to start pushing for more aggressive component-upgrade life cycles and other approaches that help minimize technical debt," he explains. He says they are also well positioned, and own the tooling, to generate SBOMs that they can then provide to software consumers who are in turn looking to manage their own supply chain risk.
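
As a rough illustration of what an SBOM records (the file name and JSON shape below are simplified assumptions; real pipelines would use dedicated CycloneDX or SPDX tooling), a DevSecOps team could enumerate pinned dependencies and emit them as components:

import json

def read_requirements(path="requirements.txt"):
    # Parse "name==version" pins from a pip requirements file.
    components = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, version = line.split("==", 1)
            components.append({"type": "library", "name": name, "version": version})
    return components

sbom = {
    "bomFormat": "CycloneDX",  # declared format; the fields here are heavily abbreviated
    "specVersion": "1.4",
    "components": read_requirements(),
}
print(json.dumps(sbom, indent=2))

The value for downstream consumers is exactly what Cornell describes: a machine-readable list of what went into the build, which they can check against vulnerability feeds.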


Know Your Risks – and Your Friends’ Risks, Too

Identifying risks and documenting response actions are only part of the equation. Crucial to the overall C-SCRM process is the communication and education of all parties involved about organizational risks and how to respond. Organizations must ensure that all personnel and third-party partners are trained on supply chain risks, encourage awareness from the top down, and involve partners and suppliers in organization-wide tests and assessments of response plans. Organizations should establish open communications with their supplier partners about risk concerns and encourage partners to do the same in return. The general idea is individual strength through community strength. As an organization matures its C-SCRM (or overall cybersecurity) process, lessons learned and best practices should be shared along the way to help bolster others’ programs. The concept of C-SCRM is not a new one. In fact, there are many sources that have provided guidance on the topic over the years. The National Institute of Standards and Technology (NIST) has a Special Publication (SP) 800-161 and an Internal Report (IR) 8276 on the subject. 


3 data quality metrics dataops should prioritize

The good news is that as business leaders trust their data, they’ll use it more for decision-making, analysis, and prediction. With that comes an expectation that the data, network, and systems for accessing key data sources are available and reliable. Ian Funnell, manager of developer relations at Matillion, says, “The key data quality metric for dataops teams to prioritize is availability. Data quality starts at the source because it’s the source data that run today’s business operations.” Funnell suggests that dataops must also show they can drive data and systems improvements. He says, “Dataops is concerned with the automation of the data processing life cycle that powers data integration and, when used properly, allows quick and reliable data processing changes.” Barr Moses, CEO and cofounder of Monte Carlo Data, shares a similar perspective. “After speaking with hundreds of data teams over the years about how they measure the impact of data quality or lack thereof, I found that two key metrics—time to detection and time to resolution for data downtime—offer a good start.”
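
A minimal sketch of those two metrics (the incident fields and timestamps below are hypothetical, not from the article) shows how little is needed to start tracking them:

from datetime import datetime
from statistics import mean

incidents = [  # hypothetical data-downtime incidents
    {"started": datetime(2022, 7, 1, 2, 0), "detected": datetime(2022, 7, 1, 9, 30), "resolved": datetime(2022, 7, 1, 14, 0)},
    {"started": datetime(2022, 7, 9, 23, 0), "detected": datetime(2022, 7, 10, 0, 15), "resolved": datetime(2022, 7, 10, 3, 45)},
]

ttd_hours = [(i["detected"] - i["started"]).total_seconds() / 3600 for i in incidents]
ttr_hours = [(i["resolved"] - i["detected"]).total_seconds() / 3600 for i in incidents]
print(f"mean time to detection:  {mean(ttd_hours):.1f} h")
print(f"mean time to resolution: {mean(ttr_hours):.1f} h")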


How Optic Detects NFT Fraud with AI and Machine Learning

The NFT space has ongoing issues with fraud, including through bad actors wholesale lifting art from one project and using it in a second project — a process often referred to as “copyminting.” They are derivative projects that have a few too many similarities to the original project to be considered anything other than a ripoff. While most of these duplicate projects do very little sales volume relative to the original, they may damage the underlying brand, contribute to the overall distrust of the NFT space, or trick less savvy buyers into spending money on something that’s the jpg equivalent of a street vendor shilling fake Rolex watches. To help combat this fraud, a few companies are emerging that specialize in fraud detection in NFTs. They tend to leverage blockchain data to help determine which project came first and apply some image detection to find metadata matches. One of these solutions is Optic, which uses artificial intelligence and machine learning to analyze the images associated with an NFT, which helps NFT marketplaces and minting platforms catch copies and protect both creators and buyers.
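
Optic's models are proprietary, but one simple, widely used building block for spotting near-duplicate artwork is perceptual hashing. The sketch below (the file names are placeholders) implements a basic average hash with Pillow and compares two images by Hamming distance:

from PIL import Image  # pip install pillow

def average_hash(path, size=8):
    # Downscale to size x size grayscale and threshold each pixel against the mean.
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    avg = sum(pixels) / len(pixels)
    return [1 if p > avg else 0 for p in pixels]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

distance = hamming(average_hash("original.png"), average_hash("suspect.png"))
print("possible copymint" if distance <= 5 else "probably distinct", f"(distance={distance})")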



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey

Daily Tech Digest - July 24, 2022

AI can see things we can’t – but does that include the future?

“What we focus on is augmented intelligence for humans to take action [on],” says Radtke when I raise this concern. “We are not prescribing the action to be taken based on the insights that we get – we're trying to make sure that the human has all the necessary intelligence to drive the behavior that they need to drive. We're reporting facts back – this actually happened here, this is what has happened in the past – and you can take action based on that. It's all about driving improved safety for everyone in that area.” When I press him on the possible human rights concern and the inevitable pushback that will arise if AI is routinely used to pre-emptively police areas deemed as problematic, he answers: “I think that with every technology that's ever been out there in history there is always a way to use it for non-good. I think you have to focus on the good that it can provide and make sure that you police the non-good behavior that could happen from it.” This will entail some sort of oversight. “There are consortiums out there to help drive the ethical adoption of AI throughout the industry – we definitely keep aware of those.


RPA vs. BPA: Which approach to automation should you use?

Where BPA and RPA overlap, according to Mullakara, is in the goal of eliminating human intervention by automating processes. "The whole idea of BPA was to remove people from the process and that's kind of what RPA is also aiming for. In the sense of the simple workflow automation, both can do it. RPA does it through a UI integration whereas BPA does it mostly with APIs. And you know, automating the workflow with the systems by invoking the systems," he tells us. However, Taulli explains that automation really won't get rid of people at this point; the usual suspects, such as recessions, will. Mullakara agrees that this messaging for BPA and RPA is a common misconception and has earned both technologies quite a bad rap. "So, what you actually automate with RPA for example is tasks – it's not jobs. It's not an entire job even if it's a process. It's not jobs, so we still need people," he says.


All the Things a Service Mesh Can Do

Many organizations have different teams and services dispersed across different networks and regions of a given cloud. Many also have services deployed across multiple cloud environments. Securely connecting these services across different cloud networks is a highly desirable function that typically requires significant effort by network teams. In addition, limitations that require non-overlapping Classless Inter-Domain Routing (CIDR) ranges between subnets can prevent network connectivity between virtual private clouds (VPCs) and virtual networks (VNETs). Service mesh products can securely connect services running on different cloud networks without requiring the same level of effort. HashiCorp Consul, for example, supports a multi-datacenter topology that uses mesh gateways to establish secure connections between multiple Consul deployments running in different networks across clouds. Team A can deploy a Consul cluster on EKS. Team B can deploy a separate Consul cluster on AKS. Team C can deploy a Consul cluster on virtual machines in a private on-premises data center.


Snowballing Ransomware Variants Highlight Growing Threat to VMware ESXi Environments

The proliferation of ransomware targeting ESXi systems poses a major threat to organizations using the technology, security experts have noted. An attacker that gains access to an ESXi host system can infect all virtual machines running on it and the host itself. If the host is part of a larger cluster with shared storage volumes, an attacker can infect all VMs in the cluster as well, causing widespread damage. "If a VMware guest server is encrypted at the operating system level, recovery from VMware backups or snapshots can be fairly easy," McGuffin says. "[But] if the VMware server itself is used to encrypt the guests, those backups and snapshots are likely encrypted as well." Recovering from such an attack would require first recovering the infrastructure and then the virtual machines. "Organizations should consider truly offline storage for backups where they will be unavailable for attackers to encrypt," McGuffin adds. Vulnerabilities are another factor that is likely fueling attacker interest in ESXi. VMware has disclosed multiple vulnerabilities in recent months.


5 typical beginner mistakes in Machine Learning

Tree-based models don't need data normalization because raw feature values are not used as multipliers and outliers don't impact them. Neural networks might not need explicit normalization either — for example, if the network already contains a layer handling normalization inside (e.g., the BatchNormalization layer in Keras). And in some cases, even linear regression might not need data normalization. This is when all the features are already in similar value ranges and have the same meaning. For example, if the model is applied to time-series data and all the features are the historical values of the same parameter. In practice, applying unneeded data normalization won't necessarily hurt the model. In most such cases, the results will be very similar to those obtained without normalization. However, an additional, unnecessary data transformation will complicate the solution and will increase the risk of introducing bugs.
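
For readers newer to the topic, this is all that data normalization usually means in practice (a minimal sketch; the sample values are arbitrary): rescale each feature to zero mean and unit variance so that scale-sensitive models treat features comparably. Tree-based models split on thresholds, which is why this rescaling does not change which samples land on each side of a split.

import numpy as np

def standardize(X):
    # Column-wise z-score normalization: (x - mean) / std.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # leave constant features untouched
    return (X - mu) / sigma

X = np.array([[1.0, 2000.0],
              [2.0, 1500.0],
              [3.0,  500.0]])
print(standardize(X).round(3))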


Git for Network Engineers Series – The Basics

Version control systems, primarily Git, are becoming more and more prevalent outside of the realm of software development. The increase in DevOps, network automation, and infrastructure as code practices over the last decade has made it even more important to not only be familiar with Git, but proficient with it. As teams move into the realm of infrastructure as code, understanding and using Git is a key skill. ... Unlike other Version Control Systems, Git uses a snapshot method to track changes instead of a delta-based method. Every time you commit in Git, it basically takes a snapshot of those files that have been changed while simply linking unchanged files to a previous snapshot, efficiently storing the history of the files. Think of it as a series of snapshots where only the changed files are referenced in the snapshot, and unchanged files are referenced in previous snapshots. Git operations are local, for the most part, meaning it does not need to interact with a remote or central repository. 
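
The snapshot idea is easier to see in a toy model than in prose. The sketch below is emphatically not Git's real implementation, but it shows the essence: each commit maps file paths to content hashes, and an unchanged file simply reuses the blob already stored by an earlier snapshot.

import hashlib

blobs = {}  # content-addressed store: hash -> file contents

def snapshot(files):
    # files: dict of path -> content. Returns a "tree" of path -> blob hash.
    tree = {}
    for path, content in files.items():
        digest = hashlib.sha1(content.encode()).hexdigest()
        blobs.setdefault(digest, content)  # stored once, shared by later commits
        tree[path] = digest
    return tree

commit1 = snapshot({"app.py": "print('v1')", "README": "docs"})
commit2 = snapshot({"app.py": "print('v2')", "README": "docs"})  # README unchanged

print(len(blobs), "blobs stored")              # 3, not 4
print(commit1["README"] == commit2["README"])  # True: the same blob is reused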


Deep learning delivers proactive cyber defense

The timing couldn't be better. The increasing availability of ransomware-as-a-service offerings, such as ransomware kits and target lists, is making it easier than ever for bad actors—even those with limited experience—to launch a ransomware attack, causing crippling damage in the very first moments of infection. Other sophisticated attackers use targeted strikes, in which the ransomware is placed inside the network to trigger on command. Another cause for concern is the increasing disappearance of an IT environment's perimeter as cloud compute storage and resources move to the edge. Today's organizations must secure endpoints or entry points of end-user devices, such as desktops, laptops, and mobile devices, from being exploited by malicious hackers—a challenging feat, according to Michael Suby, research vice president, security and trust, at IDC. "Attacks continue to evolve, as do the endpoints themselves and the end users who utilize their devices," he says. "These dynamic circumstances create a trifecta for bad actors to enter and establish a presence on any endpoint and use that endpoint to stage an attack sequence."


Towards Geometric Deep Learning III: First Geometric Architectures

The neocognitron consisted of interleaved S- and C-layers of neurons (a naming convention reflecting its inspiration in the biological visual cortex); the neurons in each layer were arranged in 2D arrays following the structure of the input image (‘retinotopic’), with multiple ‘cell-planes’ (feature maps in modern terminology) per layer. The S-layers were designed to be translationally symmetric: they aggregated inputs from a local receptive field using shared learnable weights, resulting in cells in a single cell-plane having receptive fields of the same function, but at different positions. The rationale was to pick up patterns that could appear anywhere in the input. The C-layers were fixed and performed local pooling (a weighted average), affording insensitivity to the specific location of the pattern: a C-neuron would be activated if any of the neurons in its input are activated. Since the main application of the neocognitron was character recognition, translation invariance was crucial.
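
A loose modern analogue (my construction, not from the article) maps the S-layers onto convolutions with shared weights and the fixed C-layers onto local average pooling, which together give tolerance to small shifts of the input pattern:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 5, activation="relu", input_shape=(28, 28, 1)),  # shared local receptive fields ("S")
    tf.keras.layers.AveragePooling2D(pool_size=2),                             # fixed local pooling ("C")
    tf.keras.layers.Conv2D(16, 5, activation="relu"),
    tf.keras.layers.AveragePooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g. ten character classes
])
model.summary()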


Don’t Just Climb the Ladder. Explore the Jungle Gym

Most of us do not approach work (or life) with a master plan in mind, and many of the steps we take are beautiful accidents that help us become who we are. “I’m 67 years old,” Guy said, “and I think I finally found my true calling.” He was referring to his podcast, Remarkable People, where he interviews exceptional leaders and innovators (think Jane Goodall, Neil deGrasse Tyson, Steve Wozniak, and Kristi Yamaguchi) about how they got to be remarkable. “In a sense, my whole career has prepared me for this moment. I’ve had decades of experience in startups and large companies. So that gives me the data to ask great questions that my listeners really want the answers to,” Guy said. Guy is undeniably brilliant, and his success is no accident. But still, he believes that luck has played a part in his success. In his words, “Basically, I’ve come to the conclusion that it’s better to be lucky than smart.” Maybe Guy is right. Or perhaps, the smartest people know when to take advantage of luck and act on the opportunities that present themselves. Whatever the case, it’s important to take calculated risks.


Should You Invest in a Digital Transformation Office?

With the digital transformation office comes a transformation team, who initiates organizational change. Laute says that it’s crucial that everyone inside the organization stand behind the transformation team if they truly want to see changes happening. “You need to have an environment where these people, the transformation lead and the transformation team, are allowed and are not afraid to speak up. These people shouldn't be biased, not just following what the executive board says, but really [being] able to challenge and to speak up. And they should have the freedom to call out if something is going in the wrong direction, may it be content or behavioral-wise,” she explains. And while clearly there can be technology-related challenges, Laute tells us that digital transformation is also a people problem, and calls for a change in culture and mindset in order to find success. The cultural shift, she explains, is truly where everything starts to come together in order to get the transformation going. “Digital [transformation] is not only technology. You need to change behaviors and you need to change processes. And most of the time, you change your target operating model, right?”



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley

Daily Tech Digest - July 23, 2022

How CIOs can unite sustainability and technology

CIOs must be proactive in progressing these organizational shifts, as business leaders will continue to lean on them to ensure company technologies are providing solutions without contributing to an environmental problem. While in years past this was not an active concern, the information and communications technology (ICT) sector has recently become a larger source of climate-related impact. Having produced only 1.5% of CO2 emissions in 2007, the industry's share has risen to 4% today and could reach 14% by 2040. Fortunately, CIOs can course-correct by focusing on three key areas: Net zero - Utilize green software practices that can reduce energy consumption; Trust - Build systems that protect privacy and are fair, transparent, robust, and accessible; and Governance - Make ESG the focus of technology, not an afterthought. As a first step in this transition, CIOs can begin assessing their organization's technology through the lens of sustainability to ensure that those goals are being thought about in every facet of the business. In addition, they can connect with other leaders in the company to encourage greater emphasis and dialogue in cross-organization planning for technology solutions as they relate to sustainability targets.


Design patterns for asynchronous API communication

Request and response topics are more or less what they sound like: A client sends a request message through a topic to a consumer; the consumer performs some action, then returns a response message through a topic back to the client. This pattern is a little less generally useful than the previous two. In general, this pattern creates an orchestration architecture, where a service explicitly tells other services what to do. There are a couple of reasons why you might want to use topics to power this instead of synchronous APIs: You want to keep the low coupling between services that a message broker gives us. If the service that's doing the work ever changes, the producing service doesn't need to know about it, since it's just firing a request into a topic rather than directly asking a service. The task takes a long time to finish, to the point where a synchronous request would often time out. In this case, you may decide to make use of the response topic but still make your request synchronously. You're already using a message broker for most of your communication and want to make use of the existing schema enforcement and backwards compatibility that are automatically supported by the tools used with Kafka.
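
A broker-agnostic sketch of the pattern helps make it concrete. In-memory queues stand in for Kafka topics here, and the topic names and payloads are invented; the two ideas that carry over to a real deployment are the reply topic and the correlation ID that lets the requester match an eventual response to its original request.

import queue, uuid

requests_topic = queue.Queue()   # e.g. "orders.requests"
responses_topic = queue.Queue()  # e.g. "orders.responses"

def client_send(payload):
    correlation_id = str(uuid.uuid4())
    requests_topic.put({"correlation_id": correlation_id, "reply_to": "orders.responses", "payload": payload})
    return correlation_id

def worker_consume_once():
    msg = requests_topic.get()
    result = {"status": "processed", "echo": msg["payload"]}  # the potentially long-running work
    responses_topic.put({"correlation_id": msg["correlation_id"], "result": result})

sent_id = client_send({"order_id": 42})
worker_consume_once()
response = responses_topic.get()
assert response["correlation_id"] == sent_id  # match the response to the request
print(response["result"])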


What is Data Gravity? AWS, Azure Pull Data to the Cloud

As enterprises create ever more data, they aggregate, store, and exchange this data, attracting progressively more applications and services to begin analyzing and processing their data. This “attraction” is caused, because these applications and services require higher bandwidth and/or lower latency access to the data. Therefore, as data accumulates in size, instead of pushing data over networks towards applications and services, “gravity” begins pulling applications and services to the data. This process repeats, which produces a compounding effect, meaning that as the scale of data grows, it becomes “heavier” and increasingly difficult to replicate and relocate. Ultimately, the “weight” of this data being created and stored generates a “force” that results in an inability to move the data, hence the term data gravity. Data gravity presents a fundamental problem for enterprises, which is the inability to move data at-scale. Consequently, data gravity impedes enterprise workflow performance, heightens security & regulatory concerns, and increases costs.


Windows 11 is getting a new security setting to block ransomware attacks

The new feature is rolling out to Windows 11 in a recent Insider test build, but the feature is also being backported to Windows 10 desktop and server, according to Dave Weston, vice president of OS Security and Enterprise at Microsoft. "Win11 builds now have a DEFAULT account lockout policy to mitigate RDP and other brute force password vectors. This technique is very commonly used in Human Operated Ransomware and other attacks – this control will make brute forcing much harder which is awesome!," Weston tweeted. Weston emphasized "default" because the policy is already an option in Windows 10 but isn't enabled by default. That's big news and is a parallel to Microsoft's default block on internet macros in Office on Windows devices, which is also a major avenue for malware attacks on Windows systems through email attachments and links. Microsoft paused the default internet macro block this month but will re-release the default macro block soon. The default block on untrusted macros is a powerful control against a technique that relied on end users being tricked into clicking an option to enable macros, despite warnings in Office against doing so.


Untangling Enterprise API Architecture with GraphQL

GraphQL is a query language that allows you to describe your data requirements in a more powerful and developer-friendly way than REST or SOAP. Its composability can help untangle enterprise API architecture. GraphQL becomes the communication layer for your services. Using the GraphQL specification, you get a unified experience when interacting with your services. Every service in your API architecture becomes a graph that exposes a GraphQL API. In this graph, everyone who wants to integrate or consume the GraphQL API can find all the data it contains. Data in GraphQL is represented by a schema that describes the available data structures, the shape of the data and how to retrieve it. Schemas must comply with the GraphQL specification, and the part of the organization responsible for the service can keep this schema coherent. GraphQL composability allows you to combine these different graphs — or subgraphs — into one unified graph. Many tools are available to create such a “graph of graphs."
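
As a small illustration of what "data represented by a schema" looks like in practice, here is a sketch using the graphene library for Python (my choice of library, not one named in the article): a typed shape plus a resolver that says how to fetch it.

import graphene  # pip install graphene

class Product(graphene.ObjectType):
    sku = graphene.String()
    price = graphene.Float()

class Query(graphene.ObjectType):
    product = graphene.Field(Product, sku=graphene.String(required=True))

    def resolve_product(root, info, sku):
        # In a real subgraph this would call the service that owns product data.
        return Product(sku=sku, price=9.99)

schema = graphene.Schema(query=Query)
result = schema.execute('{ product(sku: "ABC-123") { sku price } }')
print(result.data)  # {'product': {'sku': 'ABC-123', 'price': 9.99}}

Each team can publish a schema like this for its own service, and composition tooling then stitches the subgraphs into the unified graph the article describes.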


How The Great Resignation Will Become The Great Reconfiguration

We are witnessing a great reconfiguration of how employees expect to be treated by employers. Henry Ford gave his workers a full two-day weekend as early as 1926, but now a weekend is expected in most office-based jobs—unless the job involves serving customers over the weekend! We have certain expectations of the employer and employee relationship, and what was normal before the pandemic is now being challenged. Even Wall Street cannot hold back the tide. People expect more flexibility over their hours and work location. Within a few years, this will be normalized by the effect of the top talent expecting it and that expectation filtering throughout company culture. This is how work will function post-pandemic. The Great Resignation is the first step, but eventually, I believe we will call the 2020s the Great Reconfiguration. ... WFH will live on - You might want your team back in the office, but they know they can be more productive remotely, and research backs up the employees. A new Harvard study suggests that all that in-person time can be compressed into just one or two days a week.


Will Your Cyber-Insurance Premiums Protect You in Times of War?

Due to the changing market and geopolitical situation, you need to be keenly aware of the exact kind of cyber-insurance coverage your organization requires. Your decisions should be dictated by the industry you're working in, the security risk, and how much you stand to lose in the event of an attack. It's important to note that insurance providers are also being more stringent in their requirements for companies to even obtain cyber coverage in the first place. Carriers are increasingly requiring companies to practice good cyber hygiene and have rigid cybersecurity protocols in place before even offering a quote. Once you have proper cybersecurity protocols in place, you should better qualify for adequate plans. However, remember that no two plans are alike or equally inclusive. When choosing a plan, be sure to look for any fine print regarding act-of-war and terrorism exclusions or those for other "hostile acts." Even when you've done everything right, your carrier can still attempt to deny you coverage under these loopholes.


The new CIO playbook: 7 tips for success from day one

It's possible that, up to now, your focus has been solely on technology. One of the big differentiators between working on an IT team, even in a leadership role, and being CIO is that you will need to understand how technology fits into the larger business goals of the company. You will need to be a technology translator and advocate for the CEO, business leadership, and board. For that, you have to understand the business first. "We can come up with creative technical solutions," says Roberge. "We know you need an email system, a CRM system, and an ERP. But how does the business want to use those tools? How is the sales guy going to sell product and be able to get a quote out, get the tax requirements, things like that?" Business leaders are unlikely to understand technology the way you do. So, you must understand the business in order to help the other business units, the CEO, and the board understand how technology can fit into their goals. "As technology experts, we know our technology extremely well," says Roberge.


Explained: How to tell if artificial intelligence is working the way we want it to

Far from a silver bullet, explanation methods have their share of problems. For one, Ghassemi’s recent research has shown that explanation methods can perpetuate biases and lead to worse outcomes for people from disadvantaged groups. Another pitfall of explanation methods is that it is often impossible to tell if the explanation method is correct in the first place. One would need to compare the explanations to the actual model, but since the user doesn’t know how the model works, this is circular logic, Zhou says. He and other researchers are working on improving explanation methods so they are more faithful to the actual model’s predictions, but Zhou cautions that, even the best explanation should be taken with a grain of salt. “In addition, people generally perceive these models to be human-like decision makers, and we are prone to overgeneralization. We need to calm people down and hold them back to really make sure that the generalized model understanding they build from these local explanations are balanced,” he adds.
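
One crude way to quantify that faithfulness, sketched below with synthetic data and my own construction rather than anything from the researchers, is to fit a simple surrogate around a single input and score how well it reproduces the black-box model's predictions in that neighbourhood:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(random_state=0).fit(X, y)

x0 = X[0]  # the instance being explained
neighbourhood = x0 + rng.normal(scale=0.3, size=(200, 4))
bb_pred = black_box.predict(neighbourhood)

surrogate = LinearRegression().fit(neighbourhood, bb_pred)
fidelity = surrogate.score(neighbourhood, bb_pred)  # R^2 of the surrogate vs. the model
print("local explanation (coefficients):", surrogate.coef_.round(2))
print("fidelity to the actual model:", round(fidelity, 2))

A low fidelity score is a warning sign that a tidy local explanation says little about what the model actually does, which is exactly the overgeneralization Zhou warns about.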


Future-Proofing Organisations Through Transparency

Partners that trust each other perform better. Both parties should clearly understand the decisions and actions they own. Consequently, organisations cooperate with less friction and enhance accessibility to relevant information. A study in the Harvard Business Review notes that managers frequently adopt a trust but verify approach, evaluating potential partner behaviours during negotiations to determine whether they are open and honest. As one manager in the study advised, "To see if [the] person is forthcoming; ask a question you know the answer to". Transparent companies are viewed as 'ethical' as their customers believe they have nothing to hide. The new era of the business-to-business model demands transparency. Companies want to know that what they do matters and trace a project back to their organisation's vision. In a modern world where sustainability is not just a buzzword, clients want to know that partnerships are built with brands that support their morals. Unsatisfied customers disengage from a company to find one that works together to achieve a greater outcome and takes accountability for their actions.



Quote for the day:

"People will not change their minds but they will make new decisions based upon new information." -- Orrin Woodward

Daily Tech Digest - July 22, 2022

Can automated test tools eliminate QA?

The traditional quality assurance process is multi-step and requires at least two types of software testers: The first tester exercises data edit and processing functions in applications, and they ensure that all of these processes are working correctly. The second QA tester is more familiar with the business’s needs and how the application should address them. This tester is usually savvy about application technical details as well as the business systems with which the application is going to interact. But there’s more to QA than just these two front-running functions. Applications must be integration-tested to ensure that they interact and exchange data with all of the different systems and data that they work with. They must also be moved to application staging areas where they can be regression tested. This ensures that they don’t break any other existing software with which they interface and that they can run the maximum amount of transactions for which they were designed in production. From an IT standpoint, applications must pass through all of these hurdles before they can go live. 


The downside of digital transformation: why organisations must allow for those who can’t or won’t move online

Through our current research we find the reality of a digitally enabled society is, in fact, far from perfect and frictionless. Our preliminary findings point to the need to better understand the outcomes of digital transformation at a more nuanced, individual level. Reasons vary as to why a significant number of people find accessing and navigating online services difficult. And it’s often an intersection of multiple causes related to finance, education, culture, language, trust or well-being. Even when given access to digital technology and skills, the complexity of many online requirements and the chaotic life situations some people experience limit their ability to engage with digital services in a productive and meaningful way. The resulting sense of disenfranchisement and loss of control is regrettable, but it isn’t inevitable. Some organisations are now looking for alternatives to a single-minded focus on transferring services online. Other organisations are considering partnerships with intermediaries who can work with individuals who find engaging with digital services difficult.


Authentic leadership: Building an organization that thrives

Becoming an authentic leader takes a lot of self-reflection and self-awareness. You’ll need to work to understand yourself and others, using empathy and compassion as your driving force. For examples of authentic leadership in the tech industry, you can look to former CEO of Apple Steve Jobs, former CEO of GE Jack Welch, former CEO of Xerox Anne Mulcahy, and former CEO of IBM Sam Palmisano. These leaders are all known for their authentic leadership styles that helped them drive business success. To become an authentic leader, you’ll need to embark on a path of self-discovery, establish a strong set of values and principles that will guide you in your decision-making, and be completely honest with yourself about who you are. An authentic leader isn’t afraid to make mistakes or to own up to mistakes when they happen. You’ll need to make sure you’re someone who takes accountability, maintains calm under pressure, and can be vulnerable with coworkers and employees. It’s important to know your own strengths and weaknesses as an authentic leader and to identify how you cope with success, failure, and setbacks. 


Reporting to build trust: A framework

Whether you’re preparing an integrated annual report or a stand-alone sustainability report, the publication has to be informed by steps one and two. It’s also critical to put the right resources in place, in terms of both time and people, along with the right incentives and the right oversight. Companies can truly be confident in what they report only when it is subject to board oversight, relevant to the company’s strategy, and has the right governance, systems and controls in place to measure progress towards targets and plans. Many large companies that have teams of hundreds working on financial reporting often have only a handful of people working on sustainability reporting. Even with the best intentions, less-resourced areas have a higher potential to miss something that turns out to be critically important. The business world’s financial reporting capabilities have been built over 170 years. When it comes to sustainability reporting, we need to move quickly to build the right capabilities—using what we’ve learned from financial reporting. And if sustainability reporting is to be on par with financial reporting for informing resource allocation decisions, it needs to be just as robust and relevant. 


Six reasons successful leaders love questions

Comparing questions to dreams is Straus’s way of saying that questions hold the key to better understanding the subconscious dimensions of the person asking the questions. It can be extremely difficult to understand why employees think the way they do, and how to help them change their mindset and behavior if required. It then stands to reason that questions might also help leaders better understand the culture and habits of their organization. In his 1988 article, “Toward a History of the Question,” Dutch philosopher C.E.M. Struyker Boudier writes, “In and by way of his questions the human being can reach out to the divine, and likewise degrade himself to the demonic inferno of evil.” Questioning forces people to the line between good and bad, yes and no, pro and con. Asking questions is closely related to making a choice. We cannot address everything at once, so to ask a question, we must decide what to focus on and how. We have the choice to take an approach that is optimistic or pessimistic, abstract or concrete, individual or collective, broad or narrow, past- or future-oriented, etc. 


Discovering the Versatility of OpenEBS

OpenEBS provides storage for stateful applications running on Kubernetes, including dynamic local persistent volumes (like the Rancher local path provisioner) or replicated volumes using various "data engines". Similarly to Prometheus, which can be deployed on a Raspberry Pi to monitor the temperature of your beer or sourdough cultures in your basement, but also scaled up to monitor hundreds of thousands of servers, OpenEBS can be used for simple projects, quick demos, but also large clusters with sophisticated storage needs. OpenEBS supports many different "data engines", and that can be a bit overwhelming at first. But these data engines are precisely what makes OpenEBS so versatile. There are "local PV" engines that typically require little or no configuration, offer good performance, but exist on a single node and become unavailable if that node goes down. And there are replicated engines that offer resilience against node failures. Some of these replicated engines are super easy to set up, but the ones offering the best performance and features will take a bit more work.


Cyber Resiliency: What It Is and How To Build It

Creating a cyber-resilience plan requires buy-in and input from all parts of the organization, including finance, IT, and operations. “It’s important that departments work together to classify information and risk, as well as to determine where to put controls and where responsibilities lie,” Piker says. “Once a plan has been agreed upon, a budget must be carved out to fund the actual implementation of the plan.” It's important to engage the entire organization. “This is not just a technical issue under the control of a CIO or CISO,” Adkins says. “Your employees and vendors can play a critical role in spotting potential attacks to limit their impact.” Additionally, with the continuing trend toward remote work, employee cyber awareness and training is more important than ever. “This means formal policies, training, exercises simulation, and ongoing analysis of risks,” Adkins says. Adkins advises organizations to use tabletop exercises to test incident practices and times. “It's much easier to fix a flaw in your planning and processes when you’re not in the middle of a crisis,” he says. 


How kitemarks are kicking off IoT regulation

Interestingly, all those we have seen apply for the scheme have chosen to go for Gold because they want to be seen to be adhering to the highest levels and it's been attracting some big international consumer brands. The smaller players that previously had difficulty understanding and navigating the red tape involved in the Code of Practice/ETSI have also valued the guidance and human touch of an assessor. The theory is that the product assurance scheme will spur compliance ahead of the PSTI, making the transition that much easier for the IoT industry, and the fact that many have aimed high suggests the approach is working. Manufacturers like the visibility conferred by the badge, which then becomes a differentiator in the marketplace, as well as ensuring future compliance. It's for these reasons that many are watching the assurance rollout with interest. IoT kitemark schemes vary internationally, from labels that denote compliance with a set of cybersecurity criteria, to a single label that attests basic security features are provided, to several tiers or even a label that lists cybersecurity information about the IoT device.


4 tips for leading remote IT teams

Traditional enterprises tend to have a “we will train our employees only as much as we have to” mentality. However, this approach will make your employees more likely to seek other opportunities where they feel more valued and prepared. Of course, there is always the risk of employees leaving with their newfound skills, but having undertrained employees can be worse for your business and the organization. Set aside a generous annual budget for training and development and help map out a personalized training path for each employee. This is critical to employee happiness and long-term business planning. These plans should also demonstrate growth opportunities that benefit each employee – not just the organization. In-person training is great, but don’t underestimate the value of virtual training. While a personal connection with instructors can often provide more knowledge and attention, the convenience of virtual training makes it a popular alternative these days. Encourage your employees to explore training opportunities where they’re located.


How Microcontainers Gain Against Large Containers

A microcontainer is an optimized container modified for better efficiency. It still contains all the files needed to provide scaling, isolation, and parity to the software application, but the number of files kept in the image has been pared down. Important files left in the microcontainer are the shell, the package manager, and the standard C library. In parallel, there is the concept of ‘distroless’ containers, where all unused files are stripped from the image entirely. It is worth emphasizing the distinction between microcontainers and distroless images: a microcontainer still contains some files the application itself does not use, because they are required to keep the system complete. A microcontainer works the same way as a regular container and performs all the same functions; the only difference is that its internal files have been trimmed and its size reduced. It still includes all the files and dependencies required for the application to run, but in a lighter and smaller format.



Quote for the day:

"The first task of a leader is to keep hope alive." -- Joe Batten

Daily Tech Digest - July 21, 2022

Google Launches Carbon, an Experimental Replacement for C++

While Carbon began as a Google internal project, the development team ultimately wants to reduce contributions from Google, or any other single company, to less than 50% by the end of the year. They eventually want to hand the project off to an independent software foundation, where its development will be led by volunteers. ... The designers aim to release a core working version (“0.1”) by the end of the year. Carbon will be built on a foundation of modern programming principles, including a generics system that would remove the need to check and recheck the code for each instantiation. Another much-needed feature lacking in C++ is memory safety. Memory access bugs are one of the largest culprits of security exploits. Carbon designers will look for ways to better track uninitialized states, design APIs and idioms that support dynamic bounds checks, and build a comprehensive default debug build mode. Over time, the designers plan to build a safe Carbon subset. ... Carbon is for those developers who already have large codebases in C++, which are difficult to convert into Rust. Carbon is specifically what Carruth called a “successor language,” which is built atop an already existing ecosystem, C++ in this case.


The Cost of Production Blindness

DevOps and SRE are roles that didn’t exist back then. Yet today, they’re often essential for major businesses. They brought with them tremendous advancements to the reliability of production, but they also brought with them a cost: distance. Production is in the amorphous cloud, which is accessible everywhere. Yet it’s never been further away from the people who wrote the software powering it. We no longer have the fundamental insight we took for granted a bit over a decade ago. Yes, and no. We gave up some insight and control and got a lot in return: Stability; Simplicity; and Security. These are pretty incredible benefits. We don’t want to give these benefits up. But we also lost some insight, debugging became harder, and complexity rose. ... Log ingestion is probably the most expensive feature in your application. Removing a single line of log code can end up saving thousands of dollars in ingestion and storage costs. We tend to overlog since the alternative is production issues that we can’t trace to their root cause. We need a middle ground. We want the ability to follow an issue through without overlogging. Developer observability lets you add logs dynamically as needed into production.


UK government introduces data reforms legislation to Parliament

Suggested changes included removing organisations’ requirements to designate data protection officers (DPOs), ending the need for mandatory data protection impact assessments (DPIAs), introducing a “fee regime” for subject access requests (SARs), and removing the requirement to review data adequacy decisions every four years. All of these are now included in the updated Bill in some form. “We now have confirmation of what the UK’s post-GDPR data framework is intended to look like,” said Edward Machin, a senior lawyer in Ropes & Gray’s data, privacy and cyber security practice. ... “The GDPR isn’t perfect and it would be foolish for the UK not to learn from those lessons in its own approach, but it’s walking a tightrope between improvements to the current framework and performative changes for the sake of ripping up Brussels red tape. My initial impressions of the Bill are that the government has struck the balance in favour of business and overlooked some civil society concerns, so I would think that reduced rights and safeguards for individuals will be areas that are targeted for revision before the Bill is finalised.”
 

Hackers can spoof commit metadata to create false GitHub repositories

Researchers identified that a threat actor could tamper with commit metadata to make a repository appear older than it is. Alternatively, they can deceive developers by promoting repositories as trusted because reputable contributors appear to be maintaining them. It is also possible to spoof the committer's identity and attribute the commit to a genuine GitHub account. With open source software, developers can create apps faster and even skip third-party code auditing if they are sure that the source of the software is reliable. They can choose GitHub repositories that are actively maintained or whose contributors are trustworthy. Checkmarx researchers explained in their blog post that threat actors could manipulate the timestamps of the commits, which are listed on GitHub. Fake commits can also be generated automatically and added to the user's GitHub activity graph, allowing the attacker to make the account appear active on the platform for a long time. The activity graph displays activity on private and public repositories, making it impossible to discredit the fake commits.
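
Because author names, emails, and dates in commit metadata can be set to anything, one practical defensive check is whether a repository's recent commits carry verified signatures. Below is a minimal sketch against the GitHub REST API (the owner and repository names are placeholders):

import requests  # pip install requests

owner, repo = "example-org", "example-repo"
url = f"https://api.github.com/repos/{owner}/{repo}/commits"

for item in requests.get(url, params={"per_page": 5}, timeout=10).json():
    meta = item["commit"]
    verified = meta.get("verification", {}).get("verified", False)
    print(item["sha"][:7], meta["author"]["name"], meta["author"]["date"], "signed =", verified)

Unsigned commits are not proof of tampering, but signed and verified commits are much harder to spoof than the free-text metadata Checkmarx highlights.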


Hackers turn to cloud storage services in attempt to hide their attacks

The group is widely believed to be linked to the Russian Foreign Intelligence Service (SVR), responsible for several major cyberattacks, including the supply chain attack against SolarWinds, the US Democratic National Committee (DNC) hack, and espionage campaigns targeting governments and embassies around the world. Now they're attempting to use legitimate cloud services, including Google Drive and Dropbox – and have already used this tactic as part of attacks that took place between May and June this year. The attacks begin with phishing emails sent out to targets at European embassies, posing as invites to meetings with ambassadors, complete with a supposed agenda attached as a PDF. The PDF is malicious and, if it worked as intended, it would call out to a Dropbox account run by the attackers to secretly deliver Cobalt Strike – a penetration-testing tool popular with malicious attackers – to the victim's device. However, this initial call out was unsuccessful earlier this year, something researchers suggest is down to restrictive policies on corporate networks about using third-party services.

 

How Zero Trust can stop the catastrophic outcomes of cyberattacks on critical infrastructure

The impending necessity of Zero Trust should be recognised by every government and CNI provider around the world if they are to have any hopes of mitigating sophisticated attacks like ransomware. Critical Infrastructure is the backbone of a country’s economy and social order. It is impossible to maintain a sustainable society when sectors like emergency healthcare, energy distribution, food and agriculture, education, and financial services are constantly under disruptive threats. In May 2021, the US government issued an executive order for federal government agencies, to improve their cybersecurity postures and recommended moving toward a Zero Trust architecture as the solution. Following this executive order, the Pentagon launched a Zero Trust office in December 2021 and in January 2022, President Biden further emphasised the urgency of moving to a Zero Trust architecture by mandating all government agencies to achieve specific Zero Trust goals by the end of the Fiscal Year 2024.


Transparency in the shadowy world of cyberattacks

Focusing on the fundamentals of software security is in some ways more important to raise all of us above the level of insecurity we see today. We curate and use threat intelligence to protect billions of users–and have been doing so for some time. But you need more than intelligence, and you need more than security products–you need secure products. Security has to be built in, not just bolted on. Aurora showed us that we (and many in the industry) were doing cybersecurity wrong. Security back then was often “crunchy on the outside, chewy in the middle.” Great for candy bars, not so great for preventing attacks. We were building high walls to keep bad actors out, but if they got past those walls, they had wide internal access. The attack helped us recognize that our approach needed to change–that we needed to double down on security by design. We needed a future-oriented network, one that reflected the openness, flexibility, and interoperability of the internet, and the way people and organizations were already increasingly working. In short, we knew that we had to redesign security for the Cloud.


The importance of secure passwords can’t be emphasized enough

Mobile phones are a major and often overlooked concern. We found that 30% of respondents do not use antivirus on their phones, meaning they are not properly securing their devices. This is especially a concern as the demographic most often on their phones is also the one least worried about online threats and vulnerabilities. Password managers, and passwords stored in an electronic file and/or in physical format, are used most frequently for work devices and least frequently for personal phones. The Autofill option and password managers are used most often by 25-44-year-olds, and physical copies are used more by those between 55 and 65. But even if work accounts are secure, that doesn't mean that sensitive information from work doesn't carry over onto personal phones. Email and communication apps connected to work accounts are often downloaded onto personal devices, and if someone uses the same passwords across accounts, their personal devices being compromised means their work ones are as well.


Unlocking the potential of AI to increase customer retention

A true AI-fuelled CRM goes beyond simple automation. To provide real benefit, AI must aggregate data from multiple different sources — including in house-sales, marketing, and service tools. It needs to break down organisational silos to identify patterns in interactions and offer deeper customer insights. Some feel they don’t necessarily have enough primary data to build effective predictive models. There are vast amounts of organisational data generated around a single customer or prospect. The trick is to leverage a CRM that understands and captures all of these interactions in a format that can fuel AI initiatives. By breaking down the silos between business units and integrating all of the valuable data that they hold, organisations will be able to benefit from the most advanced predictive models. This is often more challenging than it should be to implement. Business systems are typically good at providing a snapshot of an organisation on any given day, but they aren’t usually as good at gathering historical information. 


Burnout: 3 steps to prevent it on your team

Company culture doesn’t just happen. Leaders must actively shape and maintain it, identifying ongoing opportunities that empower employees to support and contribute to it. Employee contributions can be as small as internal pulse surveys or as large as designing new groups or initiatives. Think about creating a club that encourages the workforce to participate in the hiring process and weigh in on how candidates would mesh with internal teams. This engagement directly shapes how the organization operates and builds positive working environments for employees - no matter the physical or remote work setting. By opening the door for employees to get involved and provide input, leaders can identify signs of fatigue earlier, address pain points before employees reach the pinnacle of exhaustion, and create a community that motivates and engages the workforce. ... Too often, leaders view benefits as the silver bullet for burnout. But benefits alone won’t cure feelings of burnout. If your workforce is giving direct feedback on areas that need improvement, simply listening is not enough. Take action to meet these needs and make your actions known.



Quote for the day:

"Take time to deliberate; but when the time for action arrives, stop thinking and go in." - - Andrew Jackson

Daily Tech Digest - July 20, 2022

CIOs contend with rising cloud costs

“A lot of our clients are stuck in the middle,” says Ashley Skyrme, senior managing director and leader of the Global Cloud First Strategy and Consulting practice at Accenture. “The spigot is turned on and they have these mounting costs because cloud availability and scalability are high, and more businesses are adopting it.” And as the migration progresses, cloud costs soon rank second only to payroll in the corporate purse, experts say. The complexity of navigating cloud use and costs has spawned a cottage industry of SaaS providers lining up to help enterprises slash their cloud bills. ... “Cloud costs are rising,” says Bill VanCuren, CIO of NCR. “We plan to manage within the large volume agreement and other techniques to reduce VMs [virtual machines].” Naturally, heavy cloud use is compounding the costs of maintaining or decommissioning data centers that are being kept online to ensure business continuity as the migration to the cloud continues. But more significant to the rising-cost problem is the lack of understanding that the compute, storage, and consumption models on the public cloud are varied, complicated, and often misunderstood, experts say.


How WiFi 7 will transform business

In practice, WiFi 7 might not be rolled out for another couple of years - especially as many countries have yet to delicense the new 6GHz spectrum for public use. However, it is coming, and it’s important to plan for this development, as things could progress more quickly than first thought. In the same way that bigger motorways are built and traffic increases to fill them, faster, more stable WiFi will encourage more usage and more users - to quote the popular business mantra: “If you build it, they will come.” WiFi 7 is a significant improvement over all the past WiFi standards. It uses the same spectrum chunks as WiFi 6/6E, and can deliver data more than twice as fast. It has a much wider bandwidth for each channel, as well as a raft of other improvements. It is thought that WiFi 7 could deliver speeds of 30 gigabits per second (Gbps) to compatible devices and that the new standard could make running cables between devices completely obsolete. It’s now not so much about what you can do with the data, but how you actually physically interact with it.


How to Innovate Fast with API-First and API-Led Integration

Many organizations have assembled their own technologies as they have tried to deliver a more productive, cloud-native platform-as-a-shared-service that different teams can use to create, compose and manage services and APIs. They try to combine integration, service development and API-management technologies on top of container technologies like Docker and Kubernetes, then add tooling on top to implement DevOps and CI/CD pipelines. Afterward come the first services and APIs, which help expose legacy systems via integration, for example. When developers have access to such a platform within their preferred tools and can reuse core APIs instead of spending time on legacy integration, they can spend more time designing and building value-added APIs faster. At best, every group can use all of these capabilities, because shared use spreads the adoption of best practices, helps get teams ramped up faster and lets them deliver more quickly. But at the very least, APIs should be shared and governed together.
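To make the idea of exposing a legacy system behind a reusable API concrete, here is a minimal sketch, assuming Python with Flask and requests; the legacy endpoint, field names, and URL path are hypothetical placeholders and this is not tied to any particular vendor's platform.

```python
# Minimal sketch: a thin JSON API facade over a legacy backend, so that
# other teams reuse this API instead of integrating with the legacy system.
# (Hypothetical URL, route, and field names.)
from flask import Flask, jsonify
import requests

app = Flask(__name__)
LEGACY_URL = "http://legacy-erp.internal/inventory"  # assumed internal endpoint

@app.route("/api/v1/items/<item_id>")
def get_item(item_id):
    # Call the legacy backend and translate its response into a clean,
    # versioned JSON contract that downstream teams can build against.
    raw = requests.get(f"{LEGACY_URL}?id={item_id}", timeout=5).json()
    return jsonify({
        "id": item_id,
        "name": raw.get("ITEM_DESC"),
        "quantity": raw.get("QTY_ON_HAND"),
    })

if __name__ == "__main__":
    app.run(port=8080)
```

Once a facade like this is published and governed on the shared platform, consuming teams never need to learn the legacy system's formats, which is the reuse effect described above.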


Using Apache Kafka to process 1 trillion inter-service messages

One important decision we made for the Messagebus cluster is to allow only one proto message per topic. This is configured in Messagebus Schema and enforced by the Messagebus-Client. It was a good decision for enabling easy adoption, but it has led to numerous topics existing. When you consider that for each topic we create, we add numerous partitions and replicate them with a replication factor of at least three for resilience, there is a lot of potential to optimize compute for our lower-throughput topics. ... Making it easy for teams to observe Kafka is essential for our decoupled engineering model to be successful. We have therefore automated metrics and alert creation wherever we can, to ensure that all the engineering teams have a wealth of information available to them to respond to any issues that arise in a timely manner. We use Salt to manage our infrastructure configuration and follow a GitOps-style model, where our repo holds the source of truth for the state of our infrastructure. To add a new Kafka topic, our engineers make a pull request into this repo and add a couple of lines of YAML.
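As a rough illustration of why the topic count multiplies quickly, here is a minimal sketch of creating a single topic programmatically with the open-source kafka-python admin client; the broker address, topic name, and partition count are placeholders, and this is not the authors' Messagebus tooling, which drives the same settings from YAML in a GitOps-style repo.

```python
# Minimal sketch: create one topic with several partitions and a replication
# factor of 3, using the kafka-python admin client (placeholder broker/topic).
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="broker-1:9092")  # placeholder broker

topic = NewTopic(
    name="example.orders.created",  # one proto message type per topic
    num_partitions=6,               # each new topic adds several partitions
    replication_factor=3,           # replicated at least 3x for resilience
)

admin.create_topics([topic])
admin.close()
```

Under these assumed settings, even this one topic means 6 partitions x 3 replicas = 18 partition replicas for the cluster to maintain, which is why many low-throughput topics add up to compute worth optimizing.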


Load Testing: An Unorthodox Guide

A common shortcut is to generate the load on the same machine that the server is running on (i.e., the developer’s laptop). What’s problematic about that? Generating load consumes CPU, memory, network bandwidth and I/O, and that will naturally skew your test results regarding how many requests your server can actually handle. Hence, you’ll want to introduce the concept of a loader: a loader is nothing more than a machine that runs, say, an HTTP client that fires off requests against your server. A loader sends n RPS (requests per second), and, of course, you’ll be able to adjust that number across test runs. You can start with a single loader for your load tests, but once that loader struggles to generate the load, you’ll want multiple loaders. (There is nothing magical about any particular number - it could be two, it could be 50.) It’s also important that the loader generates those requests at a constant rate, best done asynchronously, so that response processing doesn’t get in the way of sending out new requests. ... Bonus points if the loaders aren’t on the same physical machine, i.e., not just adjacent VMs all sharing the same underlying hardware.
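A minimal sketch of such a loader, assuming Python with asyncio and aiohttp (the target URL, rate, and duration are placeholders): requests are launched on a fixed interval and responses are handled in separate tasks, so slow responses do not slow down the send rate.

```python
# Minimal open-model loader sketch: fire requests at a constant rate and
# process responses asynchronously so they never delay the next request.
import asyncio
import aiohttp

TARGET = "http://server-under-test:8080/endpoint"  # placeholder target URL
RPS = 50            # requests per second this loader should generate
DURATION_S = 60     # how long to run the test

async def fire(session: aiohttp.ClientSession):
    try:
        async with session.get(TARGET) as resp:
            await resp.read()  # a real loader would record status and latency here
    except aiohttp.ClientError:
        pass  # count errors instead of letting them stop the loader

async def main():
    interval = 1.0 / RPS
    async with aiohttp.ClientSession() as session:
        tasks = []
        for _ in range(RPS * DURATION_S):
            tasks.append(asyncio.create_task(fire(session)))
            await asyncio.sleep(interval)  # keep the send rate constant
        await asyncio.gather(*tasks)

asyncio.run(main())
```

When a single machine can no longer sustain the configured RPS, you would run several copies of this loader on separate machines and sum their rates, as described above.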


Open-Source Testing: Why Bug Bounty Programs Should Be Embraced, Not Feared

There are two main challenges: one around decision-making, and another around integrations. Regarding decision-making, the process can vary considerably according to the project. For example, if you are talking about something like Rails, there is an accountable group of people who agree on a timetable for releases and so on. Within the decentralized ecosystem, however, these decisions may be taken by the community. For example, the DeFi protocol Compound found itself in a situation last year where, in order to agree to have a particular bug fixed, token-holders had to vote to approve the proposal. ... When it comes to integrations, these often cause problems for testers, even if their product is not itself open source. Developers include packages or modules that are written and maintained by volunteers outside the company, with no SLA in force and no process for claiming compensation if your application breaks because an open-source third-party library has not been updated, or if your build script pulls in a later version of a package that is not compatible with the application under test.


3 automation trends happening right now

IT automation specifically continues to grow as a budget priority for CIOs, according to Red Hat’s 2022 Global Tech Outlook. While it’s outranked as a discrete spending category by the likes of security, cloud management, and cloud infrastructure, in reality automation plays an increasing role in each of those areas. ... While organizations and individuals automate tasks and processes for many different reasons, the common thread is usually this: automation either reduces painful (or simply boring) work, or it enables capabilities that would otherwise be practically impossible - or both. “Automation has helped IT and engineering teams take their processes to the next level and achieve scale and diversity not possible even a few years ago,” says Anusha Iyer, co-founder and CTO of Corsha. ... Automation is central to the ability to scale distributed systems quickly, reliably, and securely - whether viewed from an infrastructure POV (think hybrid cloud and multi-cloud operations), an application architecture POV, a security POV, or through virtually any other lens. Automation is key to making it work.


CIO, CDO and CTO: The 3 Faces of Executive IT

Most companies lack experience with the CDO and CTO positions. This makes these positions (and those filling them) vulnerable to failure or misunderstanding. The CIO, who has supervised most of the responsibilities that the CDO and CTO are being assigned, can help allay fears and benefit from the cooperation, too. This can be done by forging a collaborative working partnership with both the CDO and CTO, both of whom will need IT’s help. By taking a pivotal and leading role in building these relationships, the CIO reinforces IT’s central role and helps the company realize the benefits of executive visibility into the three faces of IT: data, new-technology research, and developing and operating IT business operations. Many companies opt to place the CTO and CDO in IT, where they report to the CIO. Sometimes this is done upfront; other times, it is done when the CEO realizes that he or she doesn’t have the time or expertise to manage three different IT functions. This isn’t a bad idea, since the CIO already understands the challenges of leveraging data and researching new technologies.


Log4j: The Pain Just Keeps Going and Going

Why is Log4j such a persistent pain in the rump? First, it’s a very popular, open-source, Java-based logging framework. So it’s been embedded into thousands of other software packages. That’s no typo: Log4j is in thousands of programs. Adding insult to injury, Log4j is often deeply embedded in code and hidden from view because it is pulled in by indirect dependencies. So, the CSRB stated that “Defenders faced a particularly challenging situation; the vulnerability impacted virtually every networked organization, and the severity of the threat required fast action.” Making matters worse, according to the CSRB, “There is no comprehensive ‘customer list’ for Log4j or even a list of where it is integrated as a subsystem.” ... “The pace, pressure, and publicity compounded the defensive challenges: security researchers quickly found additional vulnerabilities in Log4j, contributing to confusion and ‘patching fatigue’; defenders struggled to distinguish vulnerability scanning by bona fide researchers from threat actors, and responders found it difficult to find authoritative sources of information on how to address the issues,” the CSRB said.
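Because no such list exists, many defenders fell back on scanning their own estates. The snippet below is a rough, hypothetical sketch of that idea in Python: it only flags log4j-core JAR files by filename, so copies shaded into other JARs or pulled in indirectly would be missed, which is exactly the hidden-dependency problem described above.

```python
# Rough sketch: walk a filesystem and flag log4j-core JARs by filename.
# This understates real exposure: it misses Log4j copies bundled ("shaded")
# inside other JARs or loaded via indirect dependencies.
import os
import re
import sys

PATTERN = re.compile(r"log4j-core-(\d+\.\d+(?:\.\d+)?)\.jar$", re.IGNORECASE)

def scan(root: str):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            match = PATTERN.search(name)
            if match:
                yield os.path.join(dirpath, name), match.group(1)

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "/"
    for path, version in scan(root):
        print(f"{version}\t{path}")
```

Running it against an application server's install directory gives a first-pass inventory of embedded Log4j versions, but a proper response still requires checking build manifests and software bills of materials.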


Major Takeaways: Cyber Operations During Russia-Ukraine War

The operational security expert known as the grugq says Russia did disrupt command-and-control communications - but the disruption failed to stymie Ukraine's military. The government had reorganized from a "Soviet-style" centralized command structure to empower relatively low-level military officers to make major decisions, such as blowing up runways at strategically important airports before they were captured by Russian forces. Lack of contact with higher-ups didn't compromise the ability of Ukraine's military to physically defend the country. ... Another surprising development is the open involvement of Western technology companies in Ukraine's cyber defense, WithSecure's Hypponen says. "I'm surprised by the fact that Western technology companies like Microsoft and Google are there on the battlefield, supporting Ukraine against governmental attacks from Russia, which is, again, something we've never seen in any other war." Western corporations aren't alone, either. Kyiv raised a first-ever volunteer "IT Army," consisting of civilians recruited to break computer crime laws in aid of the country's military defense.



Quote for the day:

"Leadership is a way of thinking, a way of acting and, most importantly, a way of communicating." -- Simon Sinek