Daily Tech Digest - September 06, 2021

We are in an age of rapid technological progress. But many are not ‘progressing’

Even risk-averse companies that readily adapt and invest in new technologies and processes encounter hurdles. One example is what is known as the ‘productivity paradox’, which is when anticipated gains in productivity and ROI are not fully realized straightaway. When Apple, Microsoft and Dell Computer arrived on the scene in the 1980s, computer usage was limited to early adopters or those who could afford a personal computer. They did not receive widespread consumer acceptance until the mid-1990s; now, computers and smart devices are an indispensable part of society. The benefits of the computer age are difficult to gauge in simple fashion. MIT’s Nobel Prize-winning economist Robert Solow stated during the internet boom of the 1990s: “You could see the computer age everywhere but in the productivity statistics.” Why? One explanation is that GDP is an imperfect measure for capturing meaningful data and translating technology’s impact on productivity, sustainability and overall well-being. The same can be said of the Gini coefficient used to measure income distribution and economic inequality among a huge swath of the population.
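
The Gini coefficient mentioned above has a compact computational definition: half the mean absolute difference between all pairs of incomes, divided by the mean. A minimal illustrative sketch (the function name and sample figures are hypothetical, not from the article):

```python
def gini(incomes):
    """Gini coefficient: mean absolute difference over all pairs, divided by
    twice the mean. 0 = perfect equality; approaches 1 as inequality grows."""
    n = len(incomes)
    mean = sum(incomes) / n
    if mean == 0:
        return 0.0
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))  # perfectly equal incomes -> 0.0
print(gini([0, 0, 0, 100]))    # one person holds everything -> 0.75
```

With n people and one holding everything, the coefficient is (n-1)/n, which is why the second call returns 0.75 rather than exactly 1.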


Zero-Trust Model Gains Luster Following Azure Security Flaw

In light of this coming tsunami, enterprises need to rethink their security strategies to embrace zero-trust and identity-based authentication. Both of those strategies are ones that experts recommend for dealing with risks like those posed by the ChaosDB vulnerability. And they will help prepare enterprises for future problems of the same kind, where much of the underlying architecture and processes are out of their control. "The cloud provider can become a single point of failure," said Dan Petro, lead researcher at security testing firm Bishop Fox. And as the industry moves even further toward serverless infrastructure, vulnerabilities like ChaosDB are likely to increase in occurrence and severity, he told Data Center Knowledge. "Anytime we have these highly visible, high-profile weaknesses, attackers are going to notice that, and it's going to inspire similar attacks, similar offensive research," said Mark Orlando, co-founder and CEO at Bionic Cyber; security operations instructor at the SANS Institute; and former security team manager at the Pentagon, the White House and the Department of Energy.


The common vulnerabilities leaving industrial systems open to attack

According to the research, industrial systems are especially open to attack when there’s a low level of protection around an external network perimeter that is accessible from the internet. Device misconfigurations and flaws in network segmentation and traffic filtering are also leaving the industrial sector particularly vulnerable. Lastly, the report also cites the use of outdated software and dictionary passwords as risky vulnerabilities. To uncover these insights, the researchers set out to actually imitate hackers and see what path they’d take to gain access. “When analyzing the security of companies’ infrastructure, Positive Technologies experts look for vulnerabilities and demonstrate the feasibility of attacks by simulating the actions of real hackers,” reads the report. “In our experience, most industrial companies have a very low level of protection against attacks.” Once inside the internal network, Positive Technologies found that attackers can obtain user credentials and full control over the infrastructure in 100% of cases. 
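
To make the "dictionary passwords" risk concrete, here is an illustrative sketch of the kind of check an auditor might run. The tiny wordlist and helper name are made up for the example; real attackers work from lists such as rockyou.txt with millions of entries:

```python
# Illustrative audit check: flag passwords drawn from a common wordlist,
# case-insensitively and with trivial digit suffixes stripped.
COMMON_WORDS = {"password", "admin", "welcome", "letmein", "qwerty"}  # stand-in list

def is_dictionary_password(password: str) -> bool:
    base = password.lower().rstrip("0123456789")  # "Password123" -> "password"
    return base in COMMON_WORDS or password.lower() in COMMON_WORDS

print(is_dictionary_password("Password123"))   # True: dictionary word plus digits
print(is_dictionary_password("tr7#Vq9!xLp2"))  # False: not derived from the wordlist
```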


8 must-ask security analyst interview questions

For those who excel in cybersecurity, their interest in the topic is not a 9-to-5 thing; it’s a passion that pervades their everyday lives. To find out if that’s the case, Lindemoen likes to ask about the candidates’ home network setup. “I look for whether they’re using WPA2 vs. WPA and WEP and whether they set up a separate network for when guests use their home wireless network,” he says. “They’re simple things, but it provides some insight into how they think about security in their personal lives.” Lindemoen also asks about which cybersecurity conferences they’d most like to attend if they could, and why. Rather than naming a well-known conference, “they might mention one that’s in a niche they’re focused on or are truly passionate about.” Participation in capture-the-flag (CTF) and other cyber calisthenics events and activities is another good barometer, Glavach says. Because these programs are free, they can be even better about revealing passion than costly certifications are. “If there’s a candidate with no certifications but they participated in CTFs similar to a DEFCON CTF or a SANS Holiday Hack, that shows me they’re very committed,” he says.


10 Most Practical Data Science Skills You Should Know in 2022

It’s one thing to build a visually stunning dashboard or an intricate model with over 95% accuracy. BUT if you can’t communicate the value of your projects to others, you won’t get the recognition that you deserve, and ultimately, you won’t be as successful in your career as you should be. Storytelling refers to “how” you communicate your insights and models. Conceptually, if you were to think about a picture book, the insights/models are the pictures and the “storytelling” refers to the narrative that connects all of the pictures. Storytelling and communication are severely undervalued skills in the tech world. From what I’ve seen in my career, this skill is what separates juniors from seniors and managers. ... A/B testing is a form of experimentation where you compare two different groups to see which performs better based on a given metric. A/B testing is arguably the most practical and widely-used statistical concept in the corporate world. Why? A/B testing allows you to compound 100s or 1000s of small improvements, resulting in significant changes and improvements over time.
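
As a concrete illustration of the statistics behind A/B testing, a two-proportion z-test is the textbook way to decide whether variant B's lift over A is real or just noise. The function and the experiment numbers below are illustrative, not from the article:

```python
import math

def ab_test_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between
    variants A and B -- a standard A/B-testing significance check."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (via erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical experiment: 200/4000 conversions for A vs 260/4000 for B
p = ab_test_pvalue(200, 4000, 260, 4000)
print(f"p-value = {p:.4f}")  # below 0.05: the lift is unlikely to be chance
```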


How To Address Bias-Variance Tradeoff in Machine Learning

Bias and variance are inversely connected, and it is nearly impossible in practice to have an ML model with both a low bias and a low variance. When we modify the ML algorithm to better fit a given data set, it will in turn lead to low bias but will increase the variance. This way, the model will fit the data set while increasing the chances of inaccurate predictions. The same applies while creating a low-variance model with a higher bias. Although it will reduce the risk of inaccurate predictions, the model will not properly match the data set. Hence there is a delicate balance between bias and variance. But having a higher variance does not in itself indicate a bad ML algorithm. Machine learning algorithms should be created so that they are able to handle some variance. Underfitting occurs when a model is unable to capture the underlying pattern of the data. Such models usually present with high bias and low variance. It happens when we have very little data to build a model or when we try to fit a linear model to nonlinear data.
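
The tradeoff can be made concrete with two deliberately extreme models: a high-bias predictor that ignores the input entirely, and a high-variance 1-nearest-neighbour predictor that memorises every training point, noise included. Everything below (data, names) is an illustrative toy, not from the article:

```python
import random

random.seed(42)
# Noisy quadratic data: y = x^2 + Gaussian noise
train = [(x / 10, (x / 10) ** 2 + random.gauss(0, 0.1)) for x in range(20)]
test_set = [(x / 10 + 0.05, (x / 10 + 0.05) ** 2 + random.gauss(0, 0.1))
            for x in range(20)]

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

mean_y = sum(y for _, y in train) / len(train)
high_bias = lambda x: mean_y                                          # ignores x: underfits
high_variance = lambda x: min(train, key=lambda p: abs(p[0] - x))[1]  # 1-NN: memorises noise

print("high bias,     train MSE:", mse(high_bias, train))        # large: misses the pattern
print("high variance, train MSE:", mse(high_variance, train))    # 0.0: memorised the data
print("high variance, test MSE: ", mse(high_variance, test_set)) # noise hurts generalisation
```

The high-variance model looks perfect on the training set yet carries its memorised noise into every prediction on new points, which is exactly the overfitting flip side of the underfitting described above.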


The benefits of Bare-Metal-as-a-Service for fintech

Dedicated servers are a better fit for resource-heavy apps. In the world of financial services, there are a lot of transactions going on. Virtual machines are not the best choice for such an environment, since the “virtualisation tax” prevents you from using 100% of their capacity. Another issue is the distribution of the platform’s resources between users – when one of them uses too much of the server’s capacity, their neighbours pay for it. ... Bare metal solutions are often harder to order than a virtual machine, and you must wait longer for the server to be prepared for operation. Another issue is the management of the disparate infrastructure of dedicated servers, virtual machines and clouds when purchased from different providers. G-Core Labs’ new offering, Bare-Metal-as-a-Service, solves these problems. With this service, a user can get a ready-for-use dedicated server as easily as a virtual one. Just select the right features, connect a private or public network, or several networks at once, and in a few minutes, the physical server will be ready for use.


Israel’s fintech community readies for ‘dramatic’ changes in banking sector

The first calls for establishing “a unique regulatory sandbox” for fintech companies in which regulators will monitor their activities while hedging their risks, and allowing them to introduce products into the Israeli market to benefit consumers. The regulatory system proposal was coordinated by an inter-ministerial team led by the Justice and Finance ministries and included representatives from the Securities Authority, the Bank of Israel (BOI), the Capital Market Authority, the Anti-Money Laundering and Terrorist Financing Authority, and the Tax Authority. The second proposal — the one watched closely by Israeli fintech startups and the legacy banks — requires banks and financial entities to transfer information about their customers, with the customers’ approval, to technology firms that can provide these customers with information about the financial services they consume, how much exactly they are paying for them and how much they could save if they switch to another financial services provider.


5 Surefire Things That’ll Get You Targeted by Ransomware

Using a password manager has become a common practice for many, but it seems like there are a lot of people who unfortunately still don’t understand the risks. There are some valid concerns with using password managers in general—like losing access to your master file, having it fall into the wrong hands, or, with hosted services, having your passwords held by a third party. But all of those are minor compared to the issues that you’re bringing about by reusing passwords as an alternative. Sure, it’s convenient. But as soon as one of your accounts is compromised, you’re going to run into a lot of trouble on many fronts. And this happens more often than you might think; companies get attacked regularly, and credentials are leaked as a result. ... As an extension to the above, watch out for the kinds of contacts you make online. People might not be who they claim to be, and you should always keep an eye open for potentially shady intentions. When you combine this with some of the above points, things can get quite scary. Some people might target you because they’ve gathered information about you from other sources, and they can make the whole interaction seem very natural and legitimate.
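
The practical alternative to reuse is one unique, random password per account, which is exactly what a password manager automates. As an illustration of how little code that takes, Python's standard-library `secrets` module can generate them; this is a sketch, not a substitute for a real password manager:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """One unique, random password per account -- the alternative to reuse.
    Uses `secrets` (a CSPRNG) rather than the predictable `random` module."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct credential per account, so a single breach cannot cascade
vault = {site: generate_password() for site in ("mail", "bank", "forum")}
for site, pw in vault.items():
    print(site, "->", pw)
```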


Utilising digital skills to tackle climate change

Upskilling is crucial to the major transition that the energy industry is currently going through. A 2020 report by EY on Oil and Gas Digital Transformation found that 43% of respondents cite “too few workers with the right skills in the current workforce” as a major challenge to digital technology adoption. Upskilling will not only equip workers with new skills but also enable organisations to reach their digital transformation goals. By embracing the rapid pace of innovation through upskilling, employers can take a proactive and agile approach to keeping workforces engaged and employees focused on their own personal development. This is not to say the skills that current workers hold are not useful for today’s needs, as many in the energy industries possess transferable skills. Workers typically possess foundational knowledge in STEM fields and soft skills which can be integrated seamlessly into newer applications. For example, skills from the oil, gas and coal sectors can be brought into the growing renewable energy sector, where job opportunities are rising sharply.



Quote for the day:

"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis

Daily Tech Digest - September 05, 2021

Digital State IDs Start Rollouts Despite Privacy Concerns

To assuage security fears that come with storing people’s identity on its devices, Apple is asserting that state DLs and IDs stored in Wallet on iPhone and Apple Watch will “take full advantage of the privacy and security” built into the devices, the company said. Apple’s mobile ID implementation supports the ISO 18013-5 mDL, or mobile driver’s license, standard being used by the government for storing digital identities. Apple played an active role in developing the standard, which the company said sets clear guidelines for the industry about how to protect consumers’ privacy when presenting an ID or driver’s license through a mobile device. Moreover, Apple devices will encrypt ID data to protect it against potential theft by threat actors, with DLs and IDs stored in Wallet presented digitally through encrypted communication directly between the device and the identity reader, the company said. This precludes the need for users to unlock, show or hand over their device to someone. Additionally, the use of Face ID and Touch ID will ensure that only the person who added the ID to the device can present it or view it on the device, according to Apple.


6 cybersecurity training best practices for SMBs

SMB owners and staff may know what cybersecurity risks are making the rounds—phishing, for example—but do they understand why these risks matter to the organization and themselves? Do they know what's required to reduce the risk? "It's important to note that raising security awareness is the goal," Poriete said. "Security communication, culture and training are different types of methods that can be used to help SMEs get there." Each company has to decide whether to develop the training in-house or find a consultant specializing in cybersecurity to recommend or create a training program specific to the company's needs. ... Learning about cybersecurity can be complex, and instructors provide too much information more often than not. The person responsible for training must avoid overloading employees with information they're unlikely to remember. "Training shouldn't be a one-off exercise but a regular activity to help maintain employees' level of awareness," Poriete said. "Think short, sharp exercises so as not to interrupt their core work or create security fatigue."


How Uber is Leveraging Apache Kafka For More Than 300 Micro Services

Uber has overcome the pub-sub message queueing system issues by implementing features via a client-side SDK. In addition, the team chose a proxy-based approach. The engineering team works across multiple programming languages, with services written in Go, Java, Python, and NodeJS. Traditionally, services written in different languages would each need their own client library; Consumer Proxy makes it possible to maintain a single implementation applicable to all services. This approach also makes it easier for the team to manage the 1000 microservices that Uber runs. Since the message pushing protocols remain unchanged, the Kafka team can upgrade the proxy at any time without affecting other services. Consumer Proxy also helps limit the blast radius of rebalancing storms resulting from rolling restarts. It rebalances the consumer group by decoupling message-consuming nodes from the message-processing services. The service can eliminate the effects of rebalancing storms by implementing its own group rebalance logic.
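
The decoupling idea behind Consumer Proxy can be sketched in a few lines: the proxy pulls messages from the log in order, but hands them to a pool of processing workers, so slow processing does not stall consumption or trigger group rebalances. This is an illustrative plain-Python simulation, not Uber's implementation and not real Kafka client code:

```python
import queue
import threading

def consumer_proxy(messages, process, workers=4):
    """Illustrative sketch: decouple message *consumption* (the for-loop
    below, which stays in order) from message *processing* (the worker
    pool), so one slow message doesn't block the whole stream."""
    buf = queue.Queue(maxsize=100)   # bounded buffer applies back-pressure
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            msg = buf.get()
            if msg is None:          # poison pill: shut this worker down
                break
            out = process(msg)
            with lock:
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for msg in messages:             # "consume" the partition in order
        buf.put(msg)
    for _ in threads:                # stop workers once the stream ends
        buf.put(None)
    for t in threads:
        t.join()
    return results

processed = consumer_proxy(range(10), lambda m: m * 2)
print(sorted(processed))
```

Note that results may complete out of order across workers; real systems layer delivery-tracking and retry queues on top of this basic shape.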


Deleting unethical data sets isn’t good enough

Scraping the web for images and text was once considered an inventive strategy for collecting real-world data. Now laws like GDPR (Europe’s data protection regulation) and rising public concern about data privacy and surveillance have made the practice legally risky and unseemly. As a result, AI researchers have increasingly retracted the data sets they created this way. But a new study shows that this has done little to keep the problematic data from proliferating and being used. The authors selected three of the most commonly cited data sets containing faces or people, two of which had been retracted; they traced the ways each had been copied, used, and repurposed in close to 1,000 papers. In the case of MS-Celeb-1M, copies still exist on third-party sites and in derivative data sets built atop the original. Open-source models pre-trained on the data remain readily available as well. The data set and its derivatives were also cited in hundreds of papers published between six and 18 months after retraction. DukeMTMC, a data set containing images of people walking on Duke University’s campus and retracted in the same month as MS-Celeb-1M, similarly persists in derivative data sets and hundreds of paper citations.


Cleveland Clinic develops bionic arm that restores ‘natural behaviors’

It enables patients to send nerve impulses from their brains to the prosthetic when they want to use or move it, and to receive physical information from the environment and relay it back to their brain through their nerves. The artificial arm’s bi-directional feedback and control enabled study participants to perform tasks with a similar degree of accuracy as non-disabled people. “Perhaps what we were most excited to learn was that they made judgments, decisions and calculated and corrected for their mistakes like a person without an amputation,” said Dr Marasco, who leads the Laboratory for Bionic Integration. “With the new bionic limb, people behaved like they had a natural hand. Normally, these brain behaviors are very different between people with and without upper limb prosthetics.” Dr Marasco also has an appointment in Cleveland Clinic’s Charles Shor Epilepsy Center and the Cleveland VA Medical Center’s Advanced Platform Technology Center.


Can healthcare avoid another AI winter?

"AI winter" refers to a period of disillusionment with AI, marked by reduced investment and progress, that follows a period of high enthusiasm and interest in AI technology. There have been two AI winters: the first in the late 1970s and early 1980s, and the second between the mid-1980s and early 1990s. Between them, expert systems and practical artificial neural networks rose to prominence. However, it became clear that these expert systems had limitations that prevented them from living up to expectations. This resulted in the second AI winter, a period of decreased AI research funding and a decline in general interest in AI. According to the Gartner Hype Cycle, we are now at risk of another AI winter in healthcare due to several AI solutions falling short of their initial hype, including natural language processing, deep learning and machine learning, which is decreasing users' trust in AI. Recent examples that highlight the growing concern over inappropriate and disappointing AI solutions include racial bias in algorithms supporting healthcare decision-making, unexpectedly poor performance in cancer diagnostic support, and inferior performance when deploying AI solutions in real-world environments.


These 'technology scouts' are hunting for the next big thing in tech. Here's how they do it

Setting up a strategy for discovering emerging technologies might seem like a daunting task, especially for smaller organisations, but a growing number of tools are now being built to help. Mergeflow, for example, is a Germany-based startup that automates the process of hunting for innovation. "People come to us because they know that there is something somewhere," Florian Wolf, the founder of Mergeflow, tells ZDNet. "It's pretty much all in the web, but you can't collect and analyse all of the data by yourself. It takes too long. You need automation to do that." Mergeflow's software, which was used by BMW to build the company's tech radar, scans thousands of scientific and technological publications, patents, news, market analyses, investor activities and other data every day. Users can search for a concept or a category and immediately access hundreds of potential innovations that are related to their query. The company's algorithm also looks at startups and companies working on each specific innovation to find out how mature they are, based on data like venture funding or collaborations with other researchers and inventors.


How to manage the growing costs of cyber security

Technological solutions aren’t the be-all and end-all of cyber security, but they do play a major role in an organisation’s defences. This is truer now than ever, as organisations find innovative ways to use tech. Cloud services have shifted into the mainstream in recent years, and they will only become more popular as businesses embrace remote working. Consider the fact that employees are now spread across the country or even across the globe, meaning countless new organisational endpoints, each of which is vulnerable to attack and must be protected. These defences rely on continuous, end-to-end monitoring and the ability to analyse threat data from multiple sources in real time. Threat monitoring tools should work in combination with a variety of other technologies – including anti-malware, encryption tools and firewalls – as part of a holistic approach to security. But that’s only one part of the equation. For these tools to be effective, organisations need experts to implement them correctly and respond appropriately to the data they gather.

IT Leadership: 10 Ways to Unleash Enterprise Innovation

Innovation never sleeps. It evolves, it accelerates, it takes different forms. In fact, organizations that want to unleash innovation are wise to discover what stifles it so they can remove the constraints. For example, innovation historically resided in research and development (R&D) departments, but now organizational leaders are more inclined to behave as though innovation can come from anywhere. In fact, some organizations believe in democratizing innovation so much that they encourage experimentation, host competitions and may even provide financial incentives. According to Jeff Wong, global chief innovation officer at multinational professional services network EY, CEOs are realizing they can't rely on a traditional innovation team when the context of a company's competition has changed. For example, retail banks used to compete against each other by stealing each other's accounts, but the same tactic won't work when the new competition is cryptocurrencies or a social network that offers stored value or investment alternatives.


Applying Genetic Engineering to your Organization Culture

Organizational culture is the organization’s behavioural blueprint; we can also call it Organization DNA. It includes the unspoken instructions for how one should behave as part of the organization; those are the boundaries of human behaviour in the working environment. This concept of hidden behavioural codes that are unique to each organization has been demonstrated many times: when an employee from one organization joins another organization, they sense those codes and change their behaviour. One of the greatest challenges in finding a suitable mechanism for manipulating the behavioural codes was that most of the genetic engineering concepts did not work at scale. Many of the methods needed very specific indicators in order to locate the specific cells that were candidates for manipulation. Deepening the investigation, we came across a field of science called epigenetics. This field explores the environment’s influence on DNA replication, and has scientifically proved that cell manufacturing is influenced not only by the DNA blueprint, but also by the cell environment.



Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman

Daily Tech Digest - September 04, 2021

AMD files teleportation patent to supercharge quantum computing

AMD has filed a patent for 'teleportation,' meaning things could be about to get much more efficient around here. With the incredible technological feats humanity achieves on a daily basis, and Nvidia's Jensen going off on one last year about GeForce holodecks and time machines, it's easy for us to slip into a headspace that lets us believe genuine human teleportation is just around the corner. "Finally," you sigh, mouthing the headline to yourself. "Goodbye work commute, hello popping to Japan for authentic ramen on my lunch break." ... Essentially, the 'out-of-order' execution method AMD is looking to lay claim to ensures some qubits that would be left idle—waiting for their calculation step to come around—are able to execute independent of a prior result. Where usually they would need to wait for previous qubits to provide instructions, they can calculate simultaneously, with no need to wait in line. So, no, we're not going to be zipping through wormholes just yet. But if AMD's designs come through, we could be looking at a much more efficient, scalable and stable quantum computing architecture than we have now.
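
The general idea behind out-of-order execution of gates can be sketched without any quantum hardware: gates acting on disjoint sets of qubits have no data dependency, so they can be placed in the same time step instead of waiting their turn. The scheduler below is an illustrative toy, not AMD's patented design:

```python
def schedule_gates(gates):
    """gates: list of (gate_name, qubit_set) in program order.
    Returns layers: gates in the same layer touch disjoint qubits,
    so they can execute simultaneously instead of strictly in order."""
    qubit_depth = {}   # first free time step for each qubit
    layers = []
    for name, qubits in gates:
        # a gate can run as soon as all of its qubits are free
        step = max((qubit_depth.get(q, 0) for q in qubits), default=0)
        if step == len(layers):
            layers.append([])
        layers[step].append(name)
        for q in qubits:
            qubit_depth[q] = step + 1
    return layers

# Two independent 2-qubit blocks: serial execution needs 4 steps, this needs 2
circuit = [("H", {0}), ("CNOT01", {0, 1}), ("X", {2}), ("CNOT23", {2, 3})]
print(schedule_gates(circuit))  # [['H', 'X'], ['CNOT01', 'CNOT23']]
```

Here the X gate on qubit 2 does not wait for the H/CNOT chain on qubits 0 and 1, which is the "no need to wait in line" behaviour the article describes.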


The Internet of Things Requires a Connected Data Infrastructure

Not long ago, a terabyte of information was an enormous amount and might be the foundation for solid decision-making. These days, it won’t cut it. For example, looking at a terabyte of data might yield a decision that’s 70% accurate. But leaving 30% to chance is unacceptable when it comes to real-time vehicle safety. On the other hand, having the ability to ingest and process 40 terabytes — from all sources, edge to core — can result in an accuracy rate well exceeding 90%. Something jumps in front of your car — is it a person, a dog, a trash bag, a child’s ball? Real-time systems need to determine the level of risk and react in milliseconds. Real-time processing has to be done closer to where the decisions are being made. In terms of IoT, a lot of questions can be answered by using a digital twin. Digital twins create additional layers of insight, provide a better understanding of what’s happening in any given situation, and help decide on the most appropriate course of immediate action. Digital twins take insight not just from the raw sensors — the edge compute nodes — but from a combination of real-time data at the edge and historical data at the core.


Can Your Organization Benefit from Edge Data Centers?

Organizations considering a move to edge computing should begin their journey by inventorying their applications and infrastructure. It's also a good idea to assess current and future user requirements, focusing on where data is created and what actions need to be performed on that data. "Generally speaking, the more susceptible data is to latency, bandwidth, or security issues, the more likely the business is to benefit from edge capabilities," said Vipin Jain, CTO of edge computing startup Pensando. “Focus on a small number of pilot projects and partner with integrators/ISVs with experience in similar deployments." Fugate recommended examining business functions and processes and linking them to the application and infrastructure services they depend on. "This will ensure that there isn’t one key centralized service that could stop critical business functions," he said. "The idea is to determine what functions must survive regardless of an infrastructure or connectivity failure." Fugate also advised determining how to effectively manage and secure distributed edge platforms.

How to Speed Up Your Digital Transformation

The complexity-in-use is often overlooked in digitalization projects because those in charge think that accounting for task and system complexity independent of one another is enough. In our case, at the beginning of the transformation, tasks and processes were considered relatively stable and independent from the new system. As a result, the loan-editing clerks were unable to complete business-critical tasks for weeks, and management needed to completely reinvent their change management approach to turn the project around and overcome operational problems in the high complexity-in-use area. They brought in more people to reduce the backlog, developed new training materials, and even changed the newly implemented system — a problem-solving technique organizations with smaller budgets wouldn’t find easy to deploy. In the end, our study partner managed this herculean task, but it took them months to get the struggling departments back on track.


Ecosystems at The Edge: Where the Data Center Becomes a Marketplace

Rapidly evolving edge computing architectures are often seen as a way for businesses to enable new applications that require low latency and place computing close to the origin of data. While those are important use cases, what is less often discussed is the opportunity for businesses to leverage the edge to spawn ecosystems that generate new revenue. To realize this value, companies must think of the edge as more than just a collection point for data from intelligent devices. They should broaden their vision to see the edge as a new business hub. These small data centers can evolve into full-fledged service providers that attract local businesses, generate e-commerce transactions and enable interconnections that never touch the central cloud. Edge computing is an expansion of cloud infrastructure that moves data collection, processing and services closer to the point at which data is created or used. It is the fastest-growing segment of the cloud category with the total market expected to expand 37% annually through 2027, according to Grand View Research.


NSA: We 'don't know when or even if' a quantum computer will ever be able to break today's public-key encryption

In the NSA's summary, a CRQC – should one ever exist – "would be capable of undermining the widely deployed public key algorithms used for asymmetric key exchanges and digital signatures" – and what a relief it is that no one has one of these machines yet. The post-quantum encryption industry has long sought to portray quantum computing as an immediate threat to today's encryption, as El Reg detailed in 2019. "The current widely used cryptography and hashing algorithms are based on certain mathematical calculations taking an impractical amount of time to solve," explained Martin Lee, a technical lead at Cisco's Talos infosec arm. "With the advent of quantum computers, we risk that these calculations will become easy to perform, and that our cryptographic software will no longer protect systems." Given that nations and labs are working toward building crypto-busting quantum computers, the NSA said it was working on "quantum-resistant public key" algorithms for private suppliers to the US government to use, pointing to the Post-Quantum Standardization Effort that NIST has been running since 2016.
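
One widely discussed interim pattern while post-quantum standards mature is hybrid key establishment: derive the session key from both a classical shared secret and a post-quantum one, so an attacker must break both exchanges to recover it. The sketch below shows only the combining step, with HMAC-SHA-256 standing in for a full KDF; the input byte strings and context label are illustrative placeholders:

```python
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes,
                       context: bytes = b"hybrid-kex-v1") -> bytes:
    """Combine a classical and a post-quantum shared secret into one
    session key (an HKDF-extract-style step). An attacker must recover
    *both* input secrets to learn the derived key."""
    return hmac.new(context, classical_secret + pq_secret,
                    hashlib.sha256).digest()

# Placeholder byte strings standing in for ECDH and PQ-KEM outputs
key = hybrid_session_key(b"\x01" * 32, b"\x02" * 32)
print(len(key), key.hex()[:16])
```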

How AI could become a detriment to society

There are multiple ways that AI could become a detriment to society. Machine learning, a subfield of AI, learns from vast quantities of data and hence carries the risk of perpetuating data bias. AI use cases including facial recognition and predictive analytics could adversely impact protected classes in areas such as loan rejection, criminal justice and racial bias, leading to unfair outcomes for certain people. ... AI is only as good as the data that is used to train it. From an industry perspective, this is problematic given there is often a lack of training data for true failures in critical systems. This becomes dangerous when a wrong prediction leads to potentially life-threatening events such as manufacturing accidents or oil spills. This is why a focus on hybrid AI and “explainable AI” is necessary. ... Unfortunately, cybercriminals have historically been better and faster adopters of technology than the rest of us. AI can become a detriment to society when deepfakes and deep learning models are used as vehicles for social engineering by scammers to steal money, sensitive data and confidential intellectual property by pretending to be people and entities we trust.


Reviewing the Eight Fallacies of Distributed Computing

The challenges of distributed systems, and the broad science around the techniques and mechanisms used to build them, are now well researched. The thing you learn when addressing these challenges in the real world, however, is that academic understanding only gets you so far. Building distributed systems involves engineering pragmatism and trade-offs, and the best solutions are the ones you discover by experience and experiment. ... However, the engineering reality is that multiple kinds of failures can, and will, occur at the same time. The ideal solution now depends on the statistical distribution of failures; or on analysis of error budgets, and the specific service impact of certain errors. The recovery mechanisms can themselves fail due to system unreliability, and the probability of those failures might impact the solution. And of course, you have the dangers of complexity: solutions that are theoretically sound, but complex, might be far more complicated to manage or understand whenever an incident takes place than simpler mechanisms that are theoretically not as complete.
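
A concrete example of the pragmatism described above is retrying with capped exponential backoff and jitter: naive immediate retries can themselves amplify an outage into a retry storm, which is exactly the "recovery mechanisms can fail" trap. A minimal sketch (names and parameters are illustrative):

```python
import random

def retry_with_backoff(op, max_attempts=5, base=0.1, cap=2.0,
                       sleep=lambda s: None):
    """Retry `op` with capped exponential backoff plus full jitter.
    `sleep` is injectable so the policy is testable without real waiting."""
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise                      # recovery itself can fail: give up
            delay = random.uniform(0, min(cap, base * 2 ** attempt))
            sleep(delay)                   # jitter spreads out retry storms

# A flaky operation that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry_with_backoff(flaky)
print(result, "after", calls["n"], "attempts")  # ok after 3 attempts
```

Even this simple policy embodies the trade-offs the author mentions: the attempt cap, the backoff base, and the jitter range all depend on the failure distribution you actually observe.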


Machine Learning Algorithm Sidesteps the Scientific Method

We might be most familiar with machine learning algorithms as they are used in recommendation engines, facial recognition and natural language processing applications. In the field of physics, however, machine learning algorithms are typically used to model complex processes such as plasma disruptions in magnetic fusion devices or the dynamic motion of fluids. In the case of this work by the Princeton team, the algorithm skips the interim step of being explicitly programmed with the conventions of physics. “The algorithms developed are robust against variations of the governing laws of physics because the method does not require any knowledge of the laws of physics other than the fundamental assumption that the governing laws are field theories,” said the team. “When the effects of special relativity or general relativity are important, the algorithms are expected to be valid as well.” The researchers’ approach was inspired in part by Oxford philosopher Nick Bostrom’s thought experiment proposing that the universe is actually a computer simulation.


What's the Real Difference Between Leadership and Management?

Leaders, like entrepreneurs, are constantly looking for ways to add to their world of expertise. They tend to enjoy reading, researching and connecting with like-minded individuals; they constantly aim to grow. They are usually open-minded and seek opportunities that challenge them to expand their level of thinking, which in turn leads to developing more solutions to problems that may arise. Managers, by contrast, often rely on existing knowledge and skills, repeating proven strategies or behaviors that have worked in the past to maintain a steady track record with clients. ... Leaders create trust and bonds with their mentees that go beyond expression or definition. Their mentees become raving fanatics willing to go above and beyond the usual scope of supporting their leader in achieving his or her mission. In the long run, the overwhelming support from these fanatics helps increase the value and credibility of the leader. On the other hand, managers direct, delegate, enforce and advise an individual or group that typically represents a brand or organization looking for direction. Followers do as they are told and rarely ask questions.



Quote for the day:

"Most people don't know how AWESOME they are, until you tell them. Be sure to tell them." -- Kelvin Ringold

Daily Tech Digest - September 03, 2021

What is a Botnet – Botnet Definition and How to Defend Against Attacks

Building a successful botnet requires thinking through the goal: a sustainable business plan, a target audience (whose devices are going to be infected, and what lure would appeal to them?), and processes to ensure the distribution and internal operations are secure. Then, a prospective botnet herder needs to start with a VPN service which takes anonymous forms of payment (possibly several services to rotate between). These services need to be unlikely to quickly hand over customer records and logs to any law enforcement agencies (a 'bulletproof' service). The next step is getting access to 'bulletproof' hosting (either a somewhat legitimate business which is *inefficient* at processing legal complaints or one specifically aimed at malware operators). Then, the herder needs domains from a registrar which is unlikely to hand over customer information to law enforcement and which accepts anonymous methods of payment. Optionally, a herder can further disguise their activity with a technique like fast flux, which can be either single or double flux.


Soft Skills For Solution Architects — Moving Beyond Technical Competence

Solution Architects’ ability to reimagine solution design, business processes and the customer journey, combined with business acumen, will be one of the most important differentiators. You need to be innovative enough to design and deliver business functions while keeping business constraints, like time, budget, quality, and available human resources, in mind. Solution Architects need to challenge the existing processes and assumptions of the industry and reimagine new processes and flows for customer journeys. Additionally, they need to possess the ability to emphasize customer experience over technology. Solution Architects need to shift the mindset and ensure that the product or service the business offers is focused on decoding the needs and demands of its stakeholders, rather than boasting a technology that is difficult to navigate. ... In the past, the Solution Architect role was seen as a bridge between Infra Architect, Network Architect, Security Architect, Storage Architect, Application Architect, and Database Architect.


Low-Code and Open Source as a Strategy

Yes, there is a “but”. For instance, our system needs an existing database. The end application will also be database-centric, which implies it is, for the most part, only interesting for CRUD systems, where CRUD stands for Create, Read, Update and Delete. However, the last figures I saw were that there are 26 million software developers in the world. These numbers are a bit old, and the total is probably much larger today than a decade ago. Regardless, the ratio is probably still the same, and the ratio tells us that 80% of these software developers work as “enterprise software developers.” An enterprise software developer is a developer working for a non-software company, where software is a secondary function. ... This implies that if you adopt Low-Code and Open Source as a strategy for your enterprise, you can optimize the way your software developers work by (at least) 5x (automating 80% of the work leaves only a fifth of it to be done manually), and probably much more. Simply because at least 80% of the work they would otherwise do manually becomes as simple as clicking a button and waiting a second for the automation process to deliver its result.
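
The "5x" figure follows from simple arithmetic; a minimal sketch (the 80% share is the article's claim, the function name is mine):

```go
package main

import "fmt"

// speedup converts "fraction of manual work automated away" into a
// throughput multiplier: if a fraction f disappears, only (1-f) of the
// work remains, so the same team gets 1/(1-f) times as much done.
func speedup(automatedFraction float64) float64 {
	return 1 / (1 - automatedFraction)
}

func main() {
	// Automating 80% of enterprise dev work yields the article's 5x.
	fmt.Printf("%.1fx\n", speedup(0.80))
}
```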


5 Rock-Solid Leadership Strategies That Drive Success

As a leader, one of the most important actions you can take is being fully engaged in your company. All too often, leaders lose touch with the nuts and bolts of their businesses. Many millennials tend to be over-delegators, delegating almost every component of their business to the point where they are unable to make the right high-level decisions, because they lack a clear understanding of what is happening at the ground level. The front-line workers of an organization tend to be the ones who interact directly with customers. When leaders rely on their executive team to find out front-line information, much can get lost in translation. A fully engaged leader knows exactly what is happening on the front line of his or her company and doesn’t hide in an ivory tower relying on others to get a pulse for the business. Full engagement in your company requires discipline as well as humility. A fully engaged CEO is one who regularly communicates directly with the front-line workers and listens carefully.


Bluetooth Bugs Open Billions of Devices to DoS, Code Execution

One of the DoS bugs (CVE-2021-34147) exists because of a failure in the SoC to free resources upon receiving an invalid LMP_timing_accuracy_response from a connected BT device (i.e., a “slave”). According to the paper: “The attacker can exhaust the SoC by (a) paging, (b) sending the malformed packet, and (c) disconnecting without sending LMP_detach,” researchers wrote. “These steps are repeated with a different BT address (i.e., BDAddress) until the SoC is exhausted from accepting new connections. On exhaustion, the SoC fails to recover itself and disrupts current active connections, triggering firmware crashes sporadically.” The researchers were able to forcibly disconnect slave BT devices from Windows and Linux laptops, and cause BT headset disruptions on Pocophone F1 and Oppo Reno 5G smartphones. Another DoS bug (CVE pending) affects only devices using the Intel AX200 SoC. It’s triggered when an oversized LMP_timing_accuracy_request (i.e., bigger than 17 bytes) is sent to an AX200 slave.


9 notable government cybersecurity initiatives of 2021

In January, the US Department of Defense (DoD) released the Cybersecurity Maturity Model Certification (CMMC), a unified standard for implementing cybersecurity across the defense industrial base (DIB), which includes over 300,000 companies in the supply chain. The CMMC reviews and combines various cybersecurity standards and best practices, mapping controls and processes across several maturity levels that range from basic to advanced cyber hygiene. “For a given CMMC level, the associated controls and processes, when implemented, will reduce risk against a specific set of cyber threats,” reads the Office of the Under Secretary of Defense for Acquisition & Sustainment website. “The CMMC effort builds upon existing regulation (DFARS 252.204-7012) that is based on trust by adding a verification component with respect to cybersecurity requirements.” The CMMC is designed to be cost-effective and affordable for all organizations, with authorized and accredited CMMC third parties conducting assessments and issuing CMMC certificates to DIB companies at the appropriate level.


In-Memory Database Architecture: Ten Years of Experience Summarized

Tarantool also has an ACID transaction mechanism. Arranging single-threaded access to data enables us to achieve the ‘serializable’ isolation level. When we access the arena, we can write to it, read from it, or modify data, and all of that happens consecutively and exclusively in one thread: two fibers cannot be executed in parallel. As far as interactive transactions are concerned, there is a separate MVCC engine. It makes it possible to execute interactive transactions in serializable mode; however, potential conflicts between transactions will need to be handled additionally. Apart from the Lua access engine, Tarantool has SQL. We have often used Tarantool as a relational database and realized that we had designed it according to relational principles: we used spaces where SQL uses tables, each row is represented by a tuple, and we defined a schema for our spaces. It became clear that we could take any SQL engine, map the primitives, and execute SQL on top of Tarantool. In Tarantool, we can invoke SQL from Lua. We can either use SQL directly or call from SQL what was defined in Lua.


Low code cuts down on dev time, increases testing headaches

Ironically, the draw of low-code for many companies is that it allows anyone to build applications, not just developers. But when bugs arise citizen developers might not have the expertise needed to resolve those issues. “Low-code solutions that are super accessible for the end-user often feature code that’s highly optimized or complicated for an inexperienced coder to read,” said Max de Lavenne, CEO of Buildable, a custom software development firm. “Low-code builds will likely use display or optimization techniques that leverage HTML and CSS to their full extent, which could be more than the average programmer could read. This is especially true for low-code used in database engineering and API connections. So while you don’t need a specialized person to test low-code builds, you do want to bring your A-team.” According to Isaac Gould, research manager at Nucleus Research, a technology analyst firm, a citizen developer should be able to handle testing of simple workflows. Eran Kinsbruner, DevOps chief evangelist at testing company Perforce Software, noted that there could be issues when more advanced tests are needed. 


Digital transformation – it’s a people problem

Reinbold says that it is vital to “shrink the change you’re trying to accomplish” once momentum towards change has been achieved: “I’ve seen way too many efforts declare some grandiose, ‘burn the boats’ type of initiative like, ‘Everybody, for all time, is going to do this thing and only this thing’. “And as you might imagine, the amount of pushback to something like that is absolutely proportional to the size of the change that is being asked for. It might be necessary, but in order to get traction, you have to build positive momentum.” His advice? Start with the uncontroversial stuff: “Ratify your process, whatever the means is – for getting that thing accepted and communicated and monitored and policed – whatever that tiny thing is, have it be uncontroversial, because you’re still figuring out how all of this works. ... The next step would be to script the critical moves. Your transformation efforts may make great viewing at 50,000 feet, but for employees in the trenches who might not understand where they are and where they need to be, the work they’re doing towards change could be confusing – and it might not make sense in their view.


Critical infrastructure today: Complex challenges and rising threats

Critical infrastructure systems face twin burdens of often having fewer resources to invest in cybersecurity, and the very critical nature of their operations, which attract adversaries and focus attention on any disruptions. When combined with the increasing connectivity of these resources and assets, organizations find themselves in a tough spot where they are targeted more often by adversaries ranging from criminal elements to state-directed entities. Low margins for error, high visibility (when systems fail or are compromised), and poor resourcing combine to make a complex defensive picture. ... Overall, current efforts appear to move the sector in the right direction by increasing focus and making resources available for defense. Where matters get tricky is the distinction between government-directed efforts and privately-owned infrastructure operators. Ultimately, government action short of legal mandates or similar actions will only go so far in addressing issues absent actions from critical infrastructure asset owners and operators. 



Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr

Daily Tech Digest - September 02, 2021

Cyber Security In Cars

ISO/SAE 21434, Road vehicles – Cybersecurity engineering, addresses the cybersecurity perspective in engineering of electrical and electronic (E/E) systems within road vehicles. It will help manufacturers keep abreast of changing technologies and cyber-attack methods, and defines the vocabulary, objectives, requirements and guidelines related to cybersecurity engineering for a common understanding throughout the supply chain. The standard, developed in collaboration with SAE International, a global association of engineers and a key ISO partner, draws on the recommendations detailed in SAE J3061, Cybersecurity guidebook for cyber-physical vehicle systems, offering more comprehensive guidance and the input of experts all around the world. Dr Gido Scharfenberger-Fabian, Convenor of the group of ISO experts that developed the standard, said it will enable organizations to define cybersecurity policies and processes, manage cybersecurity risk and foster a cybersecurity culture. “ISO/SAE 21434 will help consider cybersecurity issues at every stage of the development process and in the field, increasing the vehicle’s own cybersecurity defences and mitigating the risk of potential vulnerabilities for every component,” he said.


Ultimate Guide to Becoming a DevOps Engineer

The job title DevOps Engineer is thrown around a lot and it means different things to different people. Some people claim that the title DevOps Engineer shouldn’t exist, because DevOps is ‘a culture’ or ‘a way of working’—not a role. The same people would argue that creating an additional silo defeats the purpose of overlapping responsibilities and having different teams working together. These arguments are not wrong. In fact, some companies that understand and do DevOps engineering very well don’t even have a role with that name (like Google!). The truth is that whenever you see DevOps Engineer jobs advertised, the ad might actually be for an infrastructure engineer, a site reliability engineer (SRE), a CI/CD engineer, a sysadmin, etc. So the definition of DevOps engineer is rather broad. One thing that’s certain, though, is that to be a DevOps engineer, you must have a solid understanding of the DevOps culture and practices and you should be able to bridge any communication gaps between teams in order to achieve software delivery velocity.


WhatsApp fined a record 225 mln euro by Ireland over privacy

A WhatsApp spokesperson said in a statement the issues in question related to policies in place in 2018 and the company had provided comprehensive information. "We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate," the spokesperson said. EU privacy watchdog the European Data Protection Board said it had given several pointers to the Irish agency in July to address criticism from its peers for taking too long to decide in cases involving tech giants and for not fining them enough for any breaches. It said a WhatsApp fine should take into account Facebook's turnover and that the company should be given three months instead of six months to comply. Europe's landmark privacy rules, known as GDPR, are finally showing some teeth even if the lead regulator for some tech giants appears otherwise, said Ulrich Kelber, Germany's federal commissioner for data protection and freedom of information. "What is important now is that the many other open cases on WhatsApp in Ireland are finally decided on so that we can take faster and longer strides towards the uniform enforcement of data protection law in Europe," he told Reuters.


DevOps, Low-Code and RPA: Pros and Cons

RPA programs enable companies to automate repetitive tasks by creating software scripts using a recorder. For those of us who remember using the macro recorder in Microsoft Excel, it’s a similar concept. Once the script is created, users can then use a visual editor to modify, reorder and edit its steps. Speaking to the growing popularity of these solutions was the UiPath IPO on April 21, 2021, which ended up being one of the largest software IPOs in history. The use cases for RPA programs are unlimited—any repetitive task done via a UI is a candidate. RPA is an area where we’ve seen an intersection of business-user designed apps (UiPath and Blue Prism) with more traditional DevOps tools, specifically in the test automation space (Tricentis, Worksoft, and Eggplant), and new conversational solutions like Krista. In the case of test automation, a lightweight recorder is given to a business user who can then record a business process. The recording is then fed to the automation team, which creates a hardened test case that in turn is fed into a CI/CD system.
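
At its core, a recorded RPA script is just an ordered list of UI actions that a runner replays; a minimal sketch of that idea (the Step structure and actions are invented, and a real runner would drive an actual UI rather than build a log):

```go
package main

import "fmt"

// Step is one recorded UI action (a made-up structure for illustration).
type Step struct {
	Action string // e.g. "click", "type"
	Target string // identifier of the UI element
	Value  string // text to enter, if any
}

// Replay walks the recorded script in order, the way an RPA runner would;
// here it builds a log of the actions instead of driving a real UI.
func Replay(script []Step) []string {
	var entries []string
	for _, s := range script {
		line := s.Action + " " + s.Target
		if s.Value != "" {
			line += " " + s.Value
		}
		entries = append(entries, line)
	}
	return entries
}

func main() {
	// A "recording" of a login flow, as a recorder might capture it.
	script := []Step{
		{Action: "click", Target: "#login"},
		{Action: "type", Target: "#user", Value: "alice"},
		{Action: "click", Target: "#submit"},
	}
	for _, entry := range Replay(script) {
		fmt.Println(entry)
	}
}
```

The visual editors mentioned above operate on exactly this kind of step list: modifying, reordering or deleting entries before the script is replayed.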


IBM quantum computing: From healthcare to automotive to energy, real use cases are in play

Quantum computers are better at that than classical computers, Utz said. Anthem is running different models on IBM's quantum cloud. Right now, company officials are building a roadmap around how Anthem wants to deliver its platform using quantum technology, so "I can't say quantum is ready for primetime yet," Utz said. "The plan is to get there over the next year or so and have something working in production." A good place to start with anomaly detection is in finding fraud, he said. "Classical computers will tap out at some point and can't get to the same place as quantum computers." Other use cases are around longitudinal population health modeling, meaning that as Anthem looks at providing more of a digital platform for health, one of the challenges is that there is "almost an infinite number of relationships," he said. This includes different health conditions, providers patients see, outcomes and figuring out where there are outliers, he said. "There's only so much a classical system can do there, so we're looking for more opportunities to improve healthcare for our members and the population at large," and the ability to proactively predict risk, Utz said. 


How to Implement Domain-Driven Design (DDD) in Golang

Domain-Driven Design is a way of structuring and modeling software after the domain it belongs to. What this means is that the domain first has to be considered for the software being written. The domain is the topic or problem that the software intends to work on, and the software should be written to reflect it. DDD advocates that the engineering team meet with the subject matter experts (SMEs), the experts in the domain, because the SMEs hold the knowledge about the domain, and that knowledge should be reflected in the software. It makes sense when you think about it: if I were to build a stock trading platform, do I as an engineer know the domain well enough to build a good one? The platform would probably be a lot better off if I had a few sessions with Warren Buffett about the domain. The architecture of the code should also reflect the domain.
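
A minimal sketch of what "the software reflects the domain" can look like in Go (the types and rules are invented for illustration, reusing the trading example): the domain type owns its invariants, encoding what an SME would say about a valid order instead of leaving that check scattered through application code.

```go
package main

import (
	"errors"
	"fmt"
)

// Order is an aggregate in a hypothetical trading domain.
type Order struct {
	Symbol   string
	Quantity int
}

// NewOrder is the only way to construct an Order, so an order an SME
// would consider invalid can never exist in the system.
func NewOrder(symbol string, qty int) (*Order, error) {
	if symbol == "" {
		return nil, errors.New("order requires a symbol")
	}
	if qty <= 0 {
		return nil, errors.New("quantity must be positive")
	}
	return &Order{Symbol: symbol, Quantity: qty}, nil
}

func main() {
	if _, err := NewOrder("AAPL", 0); err != nil {
		fmt.Println("rejected:", err)
	}
	o, _ := NewOrder("AAPL", 10)
	fmt.Println("accepted:", o.Symbol, o.Quantity)
}
```

The constructor-enforced invariant is the small-scale version of the article's point: domain knowledge captured from SMEs becomes rules the code itself guarantees.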

 

China’s Personal Information Protection Law and Its Global Impact

The law’s restrictions on cross-border data transfers may not affect retailers that operate domestically and hence have no need to transfer information abroad. However, the story is vastly different for two types of companies: those in possession of large amounts of personal information and those in possession of information on critical infrastructure. Moreover, PIPL declares that the authority of domestic regulators supersedes that of international treaties. PIPL will help foreign companies operating in China without cross-border data transfers to develop privacy policies in compliance with the law. Before PIPL, the lack of a domestic PI protection law led to the broad adoption of the EU’s GDPR as a privacy policy among foreign companies. However, the GDPR’s decision-making is based on agreements among EU member states, which does not apply in the case of China. Since PIPL comes into effect in November 2021, foreign firms in China will need to revise their privacy policies to fit the requirements of the new law.


10 Characteristics of an AI-Powered Enterprise

Digital transformation makes the inclusion of AI as part of the business strategy even more important than it would be otherwise because digital organizations are software companies. Since commercial applications and tools are increasingly taking advantage of AI, the logical development by extension is AI embedded in enterprise-built applications. After all, businesses are moving more data and compute to the cloud and their new applications are being designed as cloud-first applications. Of course, AI and machine learning tooling is also available in the cloud, so developers have what they need to build “intelligent” applications. AI and machine learning don't just work on their own, however; they require testing and monitoring. “Losing trust in AI-infused applications is a high risk for AI-based innovation,” said Diego Lo Giudice, VP and principal analyst at Forrester, in a blog post. “Forrester Analytics data shows that 73% of enterprises claim to be adopting AI for building new solutions in 2021, up from 68% in 2020, and testing those AI-infused applications becomes even more critical.” Trust and safety are things that need to be proven through testing.


Why Rust is the best language for IoT development

Internet of Things (IoT) technology is rapidly terraforming the landscape of modern society right in front of our very eyes, and propelling us all into the future. It does this by providing solutions to everything from tracking your daily personal fitness goals with an Apple watch, to completely revolutionising the entire transport sector. These devices connect to each other and form the great network required for something like a digital twin; they are constantly collating data in real time from the surrounding environment, which means that the system is always using entirely current information. As amazing and powerful as this technology is, it is slightly held back by the fact that, by their very nature, IoT devices have far less processing power than your average piece of equipment. This requires much more efficient code to be written to fully take advantage of their raw potential without hurting the device’s performance. This is where Rust comes into the picture as one of the very few languages that can provide a faster runtime for IoT technology.


Are Tesla’s Dojo supercomputer claims valid?

The D1, according to Tesla, features 362 teraFLOPS of processing power. This means it can perform 362 trillion floating-point operations per second (FLOPS), Tesla says. Now imagine harnessing the processing power of 25 D1 chips into a training tile, and then linking together 120 training tiles through multiple servers. That’s what Tesla is doing with the Dojo supercomputer for its autonomous cars. And with each training tile containing 9 PFLOPS of computing power, Dojo has (by my possibly inaccurate calculations) 1.08 exaFLOPS of power under its hood (Tesla calls it 1.1 EFLOPS). That kind of horsepower would make Dojo more than twice as fast as the currently acknowledged fastest supercomputer in the world, Fugaku. Built by Fujitsu, this supercomputer reaches speeds of 442 PFLOPS. Supercomputers already are being used to accelerate medical research and drug development because they are capable of quickly processing massive amounts of data. Indeed, researchers have relied on supercomputers to power COVID-19 research since the pandemic began in early 2020.
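
The back-of-the-envelope arithmetic holds up; a quick reproduction (chip rating, chips per tile and tile count as quoted from Tesla): 25 × 362 TFLOPS ≈ 9.05 PFLOPS per tile, and 120 tiles ≈ 1.086 EFLOPS, which Tesla rounds to 1.1.

```go
package main

import "fmt"

// Figures as quoted from Tesla; the totals below are plain multiplication.
const (
	d1FLOPS      = 362e12 // 362 TFLOPS per D1 chip
	chipsPerTile = 25
	tilesInDojo  = 120
)

// tileFLOPS is the compute of one training tile.
func tileFLOPS() float64 { return chipsPerTile * d1FLOPS }

// dojoFLOPS is the compute of the full 120-tile system.
func dojoFLOPS() float64 { return tilesInDojo * tileFLOPS() }

func main() {
	fmt.Printf("per tile: %.2f PFLOPS\n", tileFLOPS()/1e15) // ~9.05, rounded to 9 above
	fmt.Printf("total:    %.3f EFLOPS\n", dojoFLOPS()/1e18) // ~1.086, Tesla's 1.1
}
```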



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing." -- Reed Markham

Daily Tech Digest - September 01, 2021

Top 3 API Vulnerabilities: Why Apps are Pwned by Cyberattackers

2021 is already the year of the API security incident, and the year is not over. API flaws impact the entire business – not just dev, security or the business groups. Finger-pointing has never fixed the problem. The fix begins with collaboration; development needs a full understanding from business groups of how the API should function. API coding is different, so a refresh on secure coding practices is warranted. And security needs to be involved upfront, to help uncover gaps before publication. A great place to start is with OWASP, which has published the API Security Top 10 and, more recently, the Completely Ridiculous API, which includes examples of bad APIs in an application. Organizations can use the Completely Ridiculous API online or in-house as an educational platform to train development and security on the errors to avoid when utilizing APIs. Whether you are utilizing an “API-first approach” or just starting your journey into digital transformation aided by APIs, knowing the vulnerabilities that are out there and what might happen if something is missed is crucial.
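
As a concrete instance of the kind of flaw that training should cover, consider Broken Object Level Authorization, the top entry in the OWASP API Security Top 10 (the handlers and data below are invented for illustration; a real API would sit behind HTTP and real authentication):

```go
package main

import (
	"errors"
	"fmt"
)

// Invoice is a record owned by one user.
type Invoice struct {
	Owner  string
	Amount int
}

var invoices = map[string]Invoice{
	"invoice-1": {Owner: "alice", Amount: 100},
	"invoice-2": {Owner: "bob", Amount: 250},
}

// GetInvoiceVulnerable trusts the caller-supplied ID: any authenticated
// user can read any object. This is the classic BOLA flaw.
func GetInvoiceVulnerable(caller, id string) (Invoice, error) {
	return invoices[id], nil // no ownership check
}

// GetInvoice additionally checks that the object belongs to the caller.
func GetInvoice(caller, id string) (Invoice, error) {
	inv, ok := invoices[id]
	if !ok || inv.Owner != caller {
		return Invoice{}, errors.New("forbidden")
	}
	return inv, nil
}

func main() {
	// alice fetches bob's invoice through the vulnerable handler:
	leaked, _ := GetInvoiceVulnerable("alice", "invoice-2")
	fmt.Println("vulnerable handler leaks:", leaked.Owner, leaked.Amount)
	// the fixed handler refuses the same request:
	if _, err := GetInvoice("alice", "invoice-2"); err != nil {
		fmt.Println("fixed handler:", err)
	}
}
```

The fix is one ownership check, but it has to be made in every handler, which is exactly why upfront security review of API designs pays off.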


How Tech Leaders Can Leverage Their Mentoring and Teaching with Coaching

Putting the focus on the other person means that we are encouraging them to do all of the work of coming up with a solution. We refrain from asking information gathering questions and instead ask questions that will help them solve the problem on their own. After all, anything that they have an answer to ... they already know! We want to help them make new connections in order to come up with new ideas that they didn’t have when they started talking to us. We also refrain from sharing our thoughts and opinions until they ask us for them directly or it is clear that they could benefit from some information that we have that they don’t. To aid in this, consider saying something early on in your conversation like, "I’m going to put my coaching hat on. I’m happy to share my expertise with you, but prefer to explore a bit first. If we get to the point where you really want to know my thoughts or I think of something that may be helpful to share, I can switch to my ‘expert’ hat."


All About Waymo’s AI-Powered Urban Driver

Waymo’s driving software is based on years of AI research, the Waymo Open Dataset initiative, and the Google Brain research team. The engineers at Waymo work with the Google Brain team to apply deep nets to the car’s pedestrian detection system. The team has created a robust, generalisable tech stack based on its operation in multiple environments and cities; the Waymo Driver has learnt to behave assertively and merge into traffic based on this experience. Waymo has also invested in creating training software for the Waymo Driver. Simulation City is software to test the autonomous vehicles and assess their performance in the cities where Waymo is present. It creates realistic conditions like spring showers, solar glare, or dimming light for the technology to experience; the researchers then learn from the system’s reactions. ... The Waymo Driver itself is trained with a highly nuanced understanding of city roads, with driving experience of more than 20 million miles on public roads and 20 million miles in simulation. Given this training, it can adapt accurately to local driving conditions.


Security engineer job requirements, certifications, and salary

IT has traditionally been a field that values skills over paper credentials—we all know the stories of tech pioneers who dropped out of high school—but that's changed over the years as the industry has become more professionalized. That said, most hiring managers do value experience and demonstrated skills, and if you can put together that sort of resume, that can help make up for a non-technical undergraduate degree. At any rate, nobody would make an immediate leap from college to a security engineer gig; you would need to pass through an introductory phase of your career first, possibly as a security analyst. One way to signal to your employer or potential future employers that you're ready to advance to a security engineer job is by pursuing some relevant formal certifications. ... One thing to keep in mind is that, while this is a tech job, it's not a job that's limited to the tech industry: just about every company that's larger than a handful of people, in every sector, needs security engineers. Government agencies and financial institutions in particular have a great need for security engineers, but you could also find yourself working in manufacturing or retail as well.


Why should I choose Quarkus over Spring for my microservices?

Quarkus can automatically detect changes made to Java and other resource and configuration files, then transparently re-compile and re-deploy the changes. Usually, within a second, you can view your application’s output or compiler error messages. This feature can also be used with Quarkus applications running in a remote environment. The remote capability is useful where rapid development or prototyping is needed but provisioning services in a local environment isn’t feasible or possible. Quarkus takes this concept a step further with its continuous testing feature to facilitate test-driven development. As changes are made to the application source code, Quarkus can automatically rerun affected tests in the background, giving developers instant feedback about the code they are writing or modifying. ... From the beginning, Quarkus was designed around Kubernetes-native philosophies, optimizing for low memory usage and fast startup times. As much processing as possible is done at build time. Classes used only at application startup are invoked at build time and not loaded into the runtime JVM, reducing the size, and ultimately the memory footprint, of the application running on the JVM.


Sustainable transformation of agriculture with the Internet of Things

With the urgency to prevent environmental degradation, reduce waste and increase profitability, farmers around the globe are increasingly opting for more efficient crop management solutions supported by optimization and control technologies derived from the Industrial Internet of Things (IIoT). Intelligent information and communication technologies (IICT), including machine learning (ML), AI, IoT, cloud-based analytics, actuators, and sensors, are being implemented to achieve higher control of spatial and temporal variabilities with the aid of satellite remote sensing. The use and application of this set of related technologies is known as “smart agriculture” (SA). In SA, real-time and continuous monitoring of weather, crop growth, plant physical/chemical variables, and other critical environmental factors allows the optimization of yield, reduction of labor, and improvement of farming products. Practices such as irrigation management, resource management, production, and fertilization operations are being facilitated by integrating IoT systems capable of providing information about multiple crop factors.


Mainframes, ML and digital transformation

Moving from mainframes to client-server didn't just mean you went from renting one kind of box to buying another - it changed the whole way that computing worked. In particular, software became a separate business, and there were all sorts of new companies selling you new kinds of software, some of which solved existing problems and some of which changed how a company could operate. SAP made just-in-time supply chains a lot easier, and that enabled Zara and Tim Cook’s Apple. New categories of software enabled new ways of doing business. The same shift is happening now, as companies move to the cloud - you go from owning boxes to renting them (perhaps), but more importantly you change what kinds of software you can use. If buying software means a URL, a login and a corporate credit card instead of getting onto the global IT department’s datacenter deployment schedule for sometime in the next three years, then you can have a lot more software from a lot more companies.


What’s next for data privacy in the UK?

Since the implementation of GDPR, there has been a surge in recruitment for roles like ‘head of data governance and privacy’. It’s time to seize this momentum and move to the next milestone – let’s call it GDPR+. GDPR+ needs to answer the question of how we protect and use data within the country and across borders. Ideally, we need a Data Privacy Act and a cross-party overseer of the whole process whose remit spans all government departments – a kind of ‘data privacy czar’, an individual with a strong background in data. The question that needs answering is how we ensure businesses align their practices with any new regulation and handle data responsibly rather than selling it for their own gain. Data fiduciaries could be part of the solution: third-party organisations given the legal right to handle private data. But it needs to be a non-political, government-funded third party. It’s most likely that the government would outsource any enforcement, but it’s pertinent to ask whether a private company would have the best interests of individual citizens at heart.


Why you want what you want

Great marketers are certainly masters of mimetic manipulation. Burgis points to Edward Bernays, the public relations pioneer, as a prime example. In 1929, when the American Tobacco Company realized that breaking the taboo against women smoking in public could generate beaucoup revenue, it hired Bernays’s firm. He convinced 30 New York City debutantes to join the Easter parade and light up Lucky Strikes—and arranged to have them photographed. The next day, the photos of the debs smoking their “torches of freedom” appeared in newspapers across the country. Sales of Lucky Strikes tripled by the following Easter. ... Much of Wanting is devoted to translating and illustrating Girard’s theories in a consumable way, and Burgis does a fine job at that task. The book’s most salient point, even if it is somewhat opaque, is that leaders choose to pursue what Burgis calls transcendent desire: “Magnanimous, great-spirited leaders are driven by transcendent desire—desire that leads outward, beyond the existing paradigm, because the models are external mediators of desire. These leaders expand everyone’s universe of desire and help them explore it.”


Getting ahead of a major blind spot for CISOs: Third-party risk

As the industry has seen firsthand, even mature and well-established enterprise security teams lack visibility into the network hygiene of their branches, offices and contractors abroad due to varying security policies and protocols, management hierarchies and known pain points in franchise-based businesses. The same applies to their supply chain, where the level of network hygiene is typically a “black box” or something the third party is simply not willing to discuss. Acquisition of quantitative, historical and recent indicators of compromise is a vital component of third-party risk management (TPRM), providing enterprise organizations with actionable information to determine whether a counterpart may be compromised with malware and which service may potentially be breached. This knowledge enables CISOs to make strategic and tactical decisions, as well as to communicate with other teams, including those responsible for vendor management and supply chain and the organization’s legal team.



Quote for the day:

"Leadership is an ever-evolving position." -- Mike Krzyzewski