Daily Tech Digest - December 09, 2020

The commodification of customer data privacy

B2B customers want personalized experiences, too. Aside from the data they might input into a contact form, B2B buyers put plenty of data online for the world to see. You can build a B2B buyer profile just by gleaning data from their LinkedIn profile and their interactions online. Software exists that enables businesses to automate the process by scraping data from public sources. But it needs to be clear that this information is being collected and stored in good faith. Businesses should limit the amount of data they collect from customers, only using the data essential to their operations. Customers should always be made aware of what data is being collected, why, and how it will be used. This information should be easy to find and understand, not obfuscated by legal jargon and fine print. Some good examples of this are the “cookie” statements businesses place on their websites under the EU’s General Data Protection Regulation (GDPR). Finally, data must be stored in a secure environment, then erased when it is no longer being used. The customer should be made aware of what policies and protections are in place regarding the use of their data.


Unethical AI unfairly impacts protected classes - and everybody else as well

Why is ethics so important now with AI? Wherever there is a social context, anything involving people, ethical questions are necessary because it becomes personal. Before big data and data science, researchers categorized people into cohorts, or categories, such as tofu lovers with a college degree, or evangelical Christians. There wasn't enough data available at the individual level to draw inferences about a single person. Even when evaluating a single person for credit or life insurance, the few available characteristics were used to compare with a larger group. What is different today is an avalanche of intimate, personal detail, exacerbated by a shift in sources, from internal "operational exhaust" to a myriad of external, non-traditional data, such as pictures and videos that are not even vetted. In the wrong hands, with the wrong model, it can wreak havoc on people's lives. The capability to produce errant models and inferences and put them into production at a scale that is orders of magnitude greater than anything before compounds the potential adverse outcomes. Today, your "digital footprint," the information about you on the internet, is so enormous that your personal data on the internet is estimated to grow by two megabytes per second.


Using deep learning to infer the socioeconomic status of people in different urban areas

Researchers at the Ecole Normale Superieure (ENS) de Lyon and Central European University (CEU) have recently developed a deep neural network that could be used to study the socioeconomic inequalities that can arise from urbanization. Their study, featured in Nature Machine Intelligence, confirms the potential of convolutional neural networks (CNNs) for the in-depth analysis of geographical regions. For many years, efficiently tracking urbanization, the process through which an urban area becomes increasingly large and populated, has proved fairly challenging. The development of increasingly advanced remote sensing and satellite technologies, however, opened up exciting new possibilities for the observation of specific geographical regions and consequently for urbanization-related research. In their study, the researchers at ENS Lyon and CEU tried to use deep learning algorithms to analyze the images collected by these tools. "Our initial goal was actually to check what was the finest spatial resolution that we could get our algorithm (i.e., predicting the average income of an area based on its satellite image) to work with," Jacob Levy Abitbol and Marton Karsai, the researchers who carried out the study, told TechXplore.


Digital transformation: 4 ways to help IT teams adapt to disruption

Prioritize user adoption and buy-in. That includes understanding generational and workstyle differences of various users and establishing clear metrics around adoption, usage, and engagement. Analyzing the depth of communication and relationships that result from the collaborations will reduce communication gaps and breakdowns and provide a clear indication that the collaboration is working. ... IT leaders aiming for digital success must better identify future skills requirements, push for increased investment and uptake in skills acquisition, improve access to quality training to support future skills, and create an agile skills development system that can adapt to market needs to fuel a culture of lifelong learning. Sometimes those answers can come from within. ... This tells us we need a different kind of leadership, one in which leaders inspire rather than require. ... Adaptive design allows the transformation strategy and resource allocation to adjust over time. That includes flexible talent allocation, a key differentiator in a transformation’s success, and ensuring resources are earmarked for initiatives that span organizational silos. It’s also important to practice the art of simplicity by valuing what works well enough and accepting solutions that satisfy business needs – you can enhance a simple solution later on.


FireEye, a Top Cybersecurity Firm, Says It Was Hacked by a Nation-State

The F.B.I. on Tuesday confirmed that the hack was the work of a state, but it also would not say which one. Matt Gorham, assistant director of the F.B.I. Cyber Division, said, “The F.B.I. is investigating the incident and preliminary indications show an actor with a high level of sophistication consistent with a nation-state.” The hack raises the possibility that Russian intelligence agencies saw an advantage in mounting the attack while American attention — including FireEye’s — was focused on securing the presidential election system. At a moment when the nation’s public and private intelligence systems were seeking out breaches of voter registration systems or voting machines, it may have been a good time for those Russian agencies, which were involved in the 2016 election breaches, to turn their sights on other targets. The hack was the biggest known theft of cybersecurity tools since those of the National Security Agency were purloined in 2016 by a still-unidentified group that calls itself the ShadowBrokers. That group dumped the N.S.A.’s hacking tools online over several months, handing nation-states and hackers the “keys to the digital kingdom,” as one former N.S.A. operator put it.


Dealing with Remote Team Challenges

Most of us are social creatures who enjoy the company of others. The concept of coming together to pursue a common goal isn’t necessarily displaced by the concept of remote or distributed work, but it can get trickier. There are opportunities for asynchronous communication, increased productivity through "flow" or uninterrupted time, and reduced travel and asset management costs. On the other hand, there are the challenges of equitable access, ensuring adequate resources and tooling, as well as the need to address social isolation and the issue of trust. What seems to be happening more and more, though, is the shift away from a hierarchical structure to a more neural one, with teams becoming smaller, more agile and cross-functional, as suggested by the May 2020 McKinsey Report. Mullenweg’s five stages of remote working suggest that those teams that have moved beyond trying to replicate the office model to become remote-first and truly asynchronous are edging closer to Nirvana, a state where distributed teams consistently perform better than any in-person team. At this point, the creativity, energy, health and productivity of the team are at their peak, with individuals performing at their highest level.


CIO interview: John Davison, First Central Group

“Intelligent automation means so much more for us than an efficiency tool,” says Davison. “We are building an entirely new technical competency into our business, so that it becomes part of our DNA. This not only changes operational execution but, importantly, changes the management mindset about the art of the possible and strategic decision-making.” The automated renewal process is another area where Blue Prism has been deployed. With the support of Blue Prism’s partner, IT and automation consultancy T-Tech, the First Central team can check the accuracy of the more than 3,000 renewal invitations issued daily in just two hours. The new process verifies each renewal notice, removing the need for costly, time-intensive manual work downstream to correct anomalies and reducing the risk of a regulatory incident. Along with driving operational efficiencies, Davison believes RPA also boosts business confidence. “Risk mitigation is a lot more intangible, but [we] can measure the cost of distraction and can measure our effectiveness from a robotics perspective,” he says. Davison’s team has established a robotics capability for the business. “It is not my job to close down operational risk,” he says.


The best programming language to learn now

The typed-language lovers are smart and they write good code, but if you think your code is good enough to run smoothly without the extra information about the data types for each variable, well, Python is ready for you. The computer can figure out the type of the data when you store it in a variable. Why make extra work for yourself? Note that this freewheeling approach may be changing, albeit slowly. The Python documentation notes that the Python runtime does not enforce function and variable type annotations, but they can still be used by third-party tools such as type checkers. Perhaps in time adding types will become the dominant way to program in the language, but for now it’s your choice. ... If you’re writing software to work with data, there’s a good chance you’ll want to use Python. The simple syntax has hooked many scientists, and the language has found a strong following in labs around the country. Now that data science is taking hold in all layers of the business world, Python is following. One of the best inventions for creating and sharing interactive documents, the Jupyter Notebook, began with the Python community before embracing other languages.
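The point about annotations being optional can be seen directly: the runtime stores them but never checks them. A minimal sketch (the function name and values are just illustrations):

```python
# Type annotations are metadata: Python stores them but does not enforce
# them at runtime, so a "wrong" argument type raises no error by itself.

def double(x: int) -> int:
    return x * 2

print(double(2))               # works as annotated: 4
print(double("ab"))            # also runs: "ab" * 2 == "abab", no type error
print(double.__annotations__)  # the hints are kept for tools to inspect
```

A static checker such as mypy would flag the second call, but the interpreter itself stays hands-off, which is exactly the freewheeling behavior described above.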


Millions of IoT Devices at Risk From TCP/IP Stack Flaws

The research is a continuation of Forescout's exploration of TCP/IP stacks. In June, Forescout revealed the so-called Ripple20 flaws in a single but widely used TCP/IP stack made by an Ohio-based company, Treck. This time around, Forescout broadened its research into more types of TCP/IP stacks. The stacks enable basic network communication. Software developers don't develop their own but instead use off-the-shelf open-source stacks in their products, or forks of those projects. "We discovered...33 vulnerabilities in four of seven [TCP/IP] stacks that we analyzed," Costante says. The flaws were found in uIP, FNET, PicoTCP and Nut/Net. Forescout also examined lwIP, CycloneTCP and uC/TCP-IP but didn't find any of the most common coding errors. But Forescout says that doesn't mean those TCP/IP stacks are necessarily free of problems. Many of the issues are centered around Domain Name System functionality. "We find that the DNS, TCP and IP sub-stacks are the most often vulnerable," Forescout says in its report. "DNS, in particular, seems to be vulnerable because of its complexity." Brad Ree, CTO of the consultancy ioXt and a board member at the ioXt Alliance, says it is concerning to see the IPv6 vulnerabilities in Forescout's findings.


How Kali Linux creators plan to handle the future of penetration testing

The Kali Linux distribution, designed specifically for penetration testing and digital forensics, is still offered free of charge. Under her leadership, OffSec has formed a dedicated Kali team and made quarterly releases since January 2019, which have received positive reviews from the community. “Kali and other projects like Exploit Database, the largest collection of exploits and vulnerabilities on the internet, keep us uniquely in tune with the needs of the security community and continue to inform our company direction,” she explained. But the thing she’s most proud of is that OffSec has become a company with a clear set of well-defined core company values: family, passion, integrity, community and innovation. “We live by these values as we scale, hire and operate. As a CEO, I found my own style through the support of our team members: have the courage to be authentic and vulnerable. We have cultivated an environment to embrace and practice a growth mindset, build vulnerability-based trust, and empower and enable our team to do their best. My job as CEO is about how to make our people happier in ways I or OffSec can influence.”



Quote for the day:

"Success consists of going from failure to failure without loss of enthusiasm." -- Winston Churchill

Daily Tech Digest - December 08, 2020

Cloud, containers, AI and RPA will spur a strong tech spending rebound in 2021

Not surprisingly, the ability to work remotely has been a critical factor. Forty-four percent of respondents cited Business Continuity Plans as a key factor. Several customers have told us, however, that their business continuity plans were far too focused on disaster recovery and as such they made tactical investments to shore up their digital capabilities. C-suite backing and budget flexibility were cited as major factors. We see this as a real positive in that the corner office and boards of directors are tuned into digital. They understand the importance of getting digital “right” and we believe that they now have good data from the past 10 months on which investments will yield the highest payback. As such, we expect further funding toward digital initiatives. Balance sheets are strong for many companies as several have tapped corporate debt and taken advantage of the low interest rate climate. Twenty-seven percent cited the use of emerging technologies as a factor. Some of these, it could be argued, fall into the first category – working remotely. The bottom line is we believe that the 10-month proof of concept that came from COVID puts organizations in a position to act quickly in 2021 to accelerate their digital transformations further by filling gaps and identifying initiatives that will bring competitive advantage.


Digital transformation teams in 2021: 9 key roles

“Data analytics is a good place to start with any transformation, to make sound decisions and design the proper solutions,” says Carol Lynn Thistle, managing director at CIO executive recruiting firm Heller Search Associates. One foundational IT position is the enterprise data architect or (in some cases) a chief data officer. These highly skilled professionals can look at blueprints, align IT tooling with information assets, and connect to the business strategy, Thistle explains. ... “Digital transformation is about automation of business processes using relevant technologies such as AI, machine learning, robotics, and distributed ledger,” says Fay Arjomandi, founder and CEO of mimik Technology, a cloud-edge platform provider. “[We need] individuals with business knowledge who can define the business process in excruciating detail. This is an important role, and we see a huge shortage in the market.” ... “[Organizations need] a digitally savvy person at the CXO level who will help other executives buy into the culture change that will be required to truly transform the organization into one that is digital-first,” says Mike Buob, vice president of customer experience and innovation for Sogeti, the technology and engineering services division of Capgemini.


Quantum Computing Marks New Breakthrough, Is 100 Trillion Times More Efficient

Jiuzhang, as the quantum computer is called, has outperformed Google’s machine, which the company had claimed last year to have achieved quantum computing supremacy. Google’s machine, named Sycamore, is a 54-qubit processor consisting of high-fidelity quantum logic gates that could perform the target computation in 200 seconds. The researchers explored boson sampling, a task considered to be a strong candidate for demonstrating quantum computational advantage. As the researchers noted in the paper, they performed Gaussian boson sampling (GBS), a new paradigm of boson sampling and one of the first feasible protocols for demonstrating quantum computational advantage. In boson sampling and its variants, nonclassical light is injected into a linear optical network, which generates highly random photon-number distributions, measured by single-photon detectors. The researchers sent 50 indistinguishable single-mode squeezed states into a 100-mode ultralow-loss interferometer with full connectivity and a random transformation matrix. They further shared that the whole optical setup is phase-locked and that the sampling of the output was done using 100 high-efficiency single-photon detectors.


Why Edge Computing Matters in IoT

The Edge basically means “not Cloud,” because what constitutes the Edge can differ depending on the application. To explain, let’s look at an example. In a hospital, you might want to know the location of all medical assets (e.g., IV pumps, EKG machines, etc.) and use a Bluetooth indoor tracking IoT solution. The solution has Bluetooth Tags, which you attach to the assets you want to track (e.g., an IV pump). You also have Bluetooth Hubs, one in each room, that listen for signals from the Tags to determine which room each Tag is in (and therefore what room the asset is in). In this scenario, both the Tags and the Hubs could be considered the “Edge.” The Tags could perform some simple calculations and only send data to the Hubs if there’s a large sensory data change. ... One of the issues with the term “IoT” is how broadly it’s defined. Autonomous vehicles that cost tens of thousands of dollars, collect terabytes of data, and use 4G cellular networks are considered IoT. At the same time, sensors that cost a couple of dollars, collect just bytes of data, and use Low-Power Wide-Area Networks (LPWANs) are also considered IoT. The problem is that everyone is focusing on high-bandwidth IoT applications like autonomous vehicles, the smart home, and security cameras.
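The Tag-side behavior described above, forwarding data only when there is a large change, is the essence of edge filtering. A minimal sketch, where the function name and the threshold value are illustrative assumptions rather than any real product's API:

```python
# Hypothetical edge filter: a Tag forwards a reading to the Hub only when
# it differs from the last forwarded value by more than a threshold.
# The threshold of 5.0 is an arbitrary illustrative choice.

def edge_filter(readings, threshold=5.0):
    """Yield only readings that changed 'enough' since the last forwarded one."""
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            last_sent = value
            yield value  # send to Hub; small fluctuations are dropped locally

print(list(edge_filter([20.0, 21.0, 22.0, 30.0, 30.5, 50.0])))
# [20.0, 30.0, 50.0]
```

Six raw readings collapse to three transmissions, which is exactly the bandwidth and battery saving that makes low-power IoT deployments viable.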


Could AI become dangerous?

When asked about the dangers of AI, Arman asserted that ‘danger has always existed in every technological innovation in history, from the ever-increasing trail of pollution caused by the first Industrial Revolution to the idea of nuclear power generation to the free use of pesticides everywhere to the genetic modification of food and so on.’ AI is only a part of that, as ‘it is on its path to outgrow humans’ capacity to fully understand how it makes decisions and what is the basis of its outcomes.’ Indeed, this would be the first time that our intellectual superiority would be taken away. To shed some light on this, Arman retells a conversation he had with AI leads from key players in Silicon Valley during a meeting in 2017: ‘After 2 hours of discussing, brainstorming and trying to picture a path, we ended up having no firm idea of where AI was leading us. The final outcome was that each individually announced that they believe it is too early to predict anything and we can’t even say with certainty where we will be in 18 months. They also refused to acknowledge the risk that was brought up through research from my team projecting that – back in 2017, even with AI still being in its infancy – it had the ability to take away over 1 billion jobs across the globe.’


What’s New on F#: Q&A With Phillip Carter

FP and Object-Oriented Programming (OOP) aren’t really at odds with each other, at least not if you use each as a tool rather than a lifestyle. In FP, you generally try to cleanly separate your data definitions from the functionality that operates on them. In OOP, you’re encouraged to combine them and blur the differences between them. Both can be incredibly helpful depending on what you’re doing. For example, in the F# language we encourage the use of objects to encapsulate data and expose functionality conveniently. That’s a far cry from encouraging people to model everything using inheritance hierarchies, and at the end of the day you still tend to work with an object in a functional way, by calling methods or properties that just produce outputs. Both styles can work well together if you don’t go “all in” on one approach or the other. ... What’s interesting is that even though F# runs on .NET, which often has an “enterprisey” kind of reputation, F# itself doesn’t really suffer the negative aspects of that kind of reputation. It can be used for enterprise work, but it’s usually seen as lightweight, and its community is engaged and available as opposed to stuck behind a corporate firewall.


3 questions to ask before adopting microservice architecture

Teams may take different routes to arrive at a microservice architecture, but they tend to face a common set of challenges once they get there. John Laban, CEO and co-founder of OpsLevel, which helps teams build and manage microservices, told us that “with a distributed or microservices based architecture your teams benefit from being able to move independently from each other, but there are some gotchas to look out for.” Indeed, the linked O’Reilly chart shows how the top 10 challenges organizations face when adopting microservices are each shared by 25%+ of respondents. While we discussed some of the adoption blockers above, feedback from our interviews highlighted issues around managing complexity. The lack of a coherent definition for a service can cause teams to generate unnecessary overhead by creating too many similar services or spreading related services across different groups. One company we spoke with went down the path of decomposing their monolith and took it too far. Their service definitions were too narrow, and by the time decomposition was complete, they were left with 4,000+ microservices to manage. They then had to backtrack and consolidate down to a more manageable number.


IT careers: 10 critical skills to master in 2021

The key to adaptability, virtual collaboration, and digital transformation (and agile) is distributed leadership and self-managed teams. This requires that everyone have core leadership skills, and not just people in the positions of managers and above. For the past 11 years, I’ve been training and coaching IT professionals at every job level – from individual contributors up to CIOs – in what I believe are the six key core leadership skills that every IT professional needs to master, even more so today than at any time in the past. ... "Yes, IT professionals need to know the underpinnings of technology and tech trends. But what many fail to realize is how heavily IT leaders rely on effective communication skills to do their jobs successfully. As CIO of ServiceNow, my role demands clear, consistent communication – both within my organization and across other functions – to make sure that everyone is aligned on the right outcomes. Communication is the key to digital transformation and IT professionals need to communicate with employees across departments on what this means for their work.” - Chris Bedi, CIO, ServiceNow


How to industrialize data science to attain mastery of repeatable intelligence delivery

As you look at the amount of productive time data scientists spend creating value, that can be pretty small compared to their non-productive time — and that’s a concern. Part of the non-productive time, of course, has been with those data scientists having to discover a model and optimize it. Then they would do the steps to operationalize it. But maybe doing the data and operations engineering things to operationalize the model can be much more efficiently done with another team of people who have the skills to do that. We’re talking about specialization here, really. But there are some other learnings as well. I recently wrote a blog about it. In it, I looked at the modern Toyota production system and started to ask questions around what we could learn about what they have learned, if you like, over the last 70 years or so. It was not just about automation, but also how they went about doing research and development, how they approached tooling, and how they did continuous improvement. We have a lot to learn in those areas. For an awful lot of organizations that I deal with, they haven’t had a lot of experience around such operationalization problems. They haven’t built that part of their assembly line yet. 


What is neuromorphic computing? Everything you need to know about how it is changing the future of computing

First, to understand neuromorphic technology it makes sense to take a quick look at how the brain works. Messages are carried to and from the brain via neurons, a type of nerve cell. If you step on a pin, pain receptors in the skin of your foot pick up the damage and trigger something known as an action potential -- basically, a signal to activate -- in the neuron that's connected to the foot. The action potential causes the neuron to release chemicals across a gap called a synapse, and this happens across many neurons until the message reaches the brain. Your brain then registers the pain, at which point messages are sent from neuron to neuron until the signal reaches your leg muscles -- and you move your foot. An action potential can be triggered either by lots of inputs at once (spatial) or by input that builds up over time (temporal). These techniques, plus the huge interconnectivity of synapses -- one synapse might be connected to 10,000 others -- mean the brain can transfer information quickly and efficiently. Neuromorphic computing models the way the brain works through spiking neural networks. Conventional computing, by contrast, is based on transistors that are either on or off, one or zero.
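The temporal build-up and threshold firing described above is often modeled with a leaky integrate-and-fire neuron, the basic unit of many spiking neural networks. The sketch below is a generic textbook version under assumed parameter values, not the model used by any particular neuromorphic chip:

```python
# Leaky integrate-and-fire neuron: input current accumulates in a membrane
# potential that leaks over time; crossing the threshold fires a spike
# (the "action potential") and resets the potential. Threshold and leak
# values here are arbitrary illustrative assumptions.

def simulate(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spike_times = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # leak, then integrate input
        if potential >= threshold:
            spike_times.append(t)  # fire a spike...
            potential = 0.0        # ...and reset, like a real neuron
    return spike_times

# Sub-threshold inputs build up over time (temporal summation) until the
# neuron fires at t=3; a single large input at t=5 fires it immediately.
print(simulate([0.3, 0.3, 0.3, 0.3, 0.0, 1.2]))  # [3, 5]
```

Note that information is carried by *when* spikes occur rather than by a continuous on/off signal, which is the key contrast with conventional transistor logic drawn at the end of the paragraph.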



Quote for the day:

"Every great leader has incredible odds to overcome." -- Wayde Goodall

Daily Tech Digest - December 07, 2020

API3: The Glue Connecting the Blockchain to the Digital World

dAPIs are on-chain data feeds composed of aggregated responses from first-party (API provider-operated) oracles. This allows for the removal of many vulnerabilities, unnecessary redundancies, and middleman taxes created by existing third-party oracle solutions. Further, using first-party oracles leverages the off-chain reputation of the API provider (compare this to the nonexistent reputation of anonymous third-party oracles). See our article “First-Party vs Third-Party Oracles” for a more extended treatise on these issues. Further, dAPIs are data feeds built with transparency. What we mean by this is: you know exactly where the data comes from — this ensures things like data quality as well as independence of data sources to mitigate skewness in aggregated results. Rather than having oracle-level staking — which is impractical and arguably infeasible for reasons alluded to in this article — API3 has a staking pool. API3 holders can provide stake to the protocol. This stake backs insurance services that protect users from potential damages caused by dAPI malfunctions. The collateral utility has the participants share API3’s operational risk and incentivizes them to minimize it. Staking in the protocol also grants stakers inflationary rewards and shares in profits.


Reconciling political beliefs with career ambitions

Data has been on the front lines in recent culture wars due to accusations of racial, gender, and other forms of socioeconomic bias perpetrated in whole or in part through algorithms. Algorithmic biases have become a hot-button issue in global society, a trend that has spurred many jurisdictions and organizations to institute a greater degree of algorithmic accountability in AI practices. Data scientists who’ve long been trained to eliminate biases from their work now find their practices under growing scrutiny from government, legal, regulatory, and other circles. Eliminating bias in the data and algorithms that drive AI requires constant vigilance on the part of not only data scientists but up and down the corporate ranks. As Black Lives Matter and similar protests have pointed out, data-driven algorithms can embed serious biases that harm demographic groups (racial, gender, age, religious, ethnic, or national origin) in various real-world contexts. Much of the recent controversy surrounding algorithmic biases has focused on AI-driven facial recognition software. Biases in facial recognition applications are especially worrisome if used to direct predictive policing programs or potential abuse by law enforcement in urban areas with many disadvantaged minority groups.


Why Data Privacy Is Crucial to Fighting Disinformation

In essence, if you can create a digital clone of a person, you can much better predict his or her online behavior. That’s a core part of the monetization model of social media companies, but it could become a capability of adversarial states who acquire the same data through third parties. That would enable much more effective disinformation. A new paper from the Center for European Policy Analysis, or CEPA, also out on Wednesday, observes that while there has been progress against some tactics that adversaries used in 2016, policy responses to the broader threat of micro-targeted disinformation “lag.” “Social media companies have concentrated on takedowns of inauthentic content,” wrote authors Alina Polyakova and Daniel Fried. “That is a good (and publicly visible) step but does not address deeper issues of content distribution (e.g., micro-targeting), algorithmic bias toward extremes, and lack of transparency. The EU’s own evaluation of the first year of implementation of its Code of Practice concludes that social media companies have not provided independent researchers with data sufficient for them to make independent evaluations of progress against disinformation.” Polyakova and Fried suggest the U.S. government make several organizational changes to counter foreign disinformation.


How to assess the transformation capabilities of intelligent automation

We’re talking about smart, multi-tasking robots that are increasingly trusted as catalysts at the core of digital work transformation strategies. This is because they effortlessly perform joined-up, data-driven work across multiple operating environments of complex, disjointed, difficult-to-modify legacy systems and manual workflows. And unlike any other robot, they deliver work without interruption, automatically making adjustments according to obstacles – different screens, layouts or fonts, application versions, system settings, permissions, and even language. These robots also uniquely solve the age-old problem of system interoperability by reading and understanding applications’ screens in the same way humans do. They’re re-purposing the human interface as a machine-usable API – crucially without touching underlying system programming logic. This ‘universal connectivity’ also means that all current and future technologies can be used by robots – without the need for APIs, or any form of system integration. ... This capability breathes new life into any age of technology and enables these robots to be continually augmented with the latest cloud, artificial intelligence, machine learning, and cognitive capabilities that are simply ‘dragged and dropped’ into newly designed work process flows.


Basics of the pairwise, or all-pairs, testing technique

All-pairs testing greatly reduces testing time, which in turn controls testing costs. The QA team only checks a subset of input/output values -- not all -- to generate effective test coverage. This technique proves useful when there are simply too many possible configuration options and combinations to run through. Pairwise testing tools make this task even easier. Numerous open source and free tools exist to generate pairwise value sets. The tester must inform the tool about how the application functions for these value sets to be effective. With or without a pairwise testing tool, it's crucial for QA professionals to analyze the software and understand its function to create the most effective set of values. Pairwise testing is not a no-brainer in a testing suite. Beware these factors that could limit the effectiveness of all-pairs testing: unknown interdependencies of variables within the software being tested; unrealistic value combinations, or ones that don't reflect the end user; defects that the tester can't see, such as ones that don't reflect in a UI view but trigger error messages into a log or other tracker; and tests that don't find defects in the back-end processing engines or systems. 
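To make the reduction concrete, here is a toy greedy all-pairs generator. Real pairwise tools use more sophisticated algorithms, and the parameter names and values below are invented purely for illustration:

```python
from itertools import product, combinations

# Toy greedy all-pairs sketch: pick test cases from the full cartesian
# product until every 2-way combination of parameter values is covered.
# Parameters and values are made-up examples.

params = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "de"],
}

names = list(params)
candidates = [dict(zip(names, values)) for values in product(*params.values())]

# Every pair of values across every pair of parameters must be covered.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in params[a]
    for vb in params[b]
}

def pairs_covered(case, pair_set):
    """Pairs from pair_set that the given test case exercises."""
    return {((a, va), (b, vb)) for (a, va), (b, vb) in pair_set
            if case[a] == va and case[b] == vb}

tests = []
while uncovered:
    # Greedily pick the case covering the most still-uncovered pairs.
    best = max(candidates, key=lambda c: len(pairs_covered(c, uncovered)))
    uncovered -= pairs_covered(best, uncovered)
    tests.append(best)

print(f"{len(tests)} pairwise cases instead of {len(candidates)} exhaustive ones")
```

Even on this tiny three-parameter example the suite halves; with ten parameters of four values each, the exhaustive product explodes to over a million cases while a pairwise set stays in the dozens, which is where the real cost savings appear.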


How can companies secure a hybrid workforce in 2021?

Even before remote work was ubiquitous, accidental and malicious insider threats posed a serious risk to data security. As trusted team members, employees have unprecedented access to company and customer data, which, when left unchecked, can undermine company, customer, and employee privacy. These risks are magnified by remote work. Not only has the pandemic’s impact on the job market made malicious insiders more likely to capture or compromise data to gain leverage with new employment prospects or to generate extra income, but accidental insiders are especially prone to errors when working remotely. For example, many employees are blurring the lines between personal and professional technology, sharing or accessing sensitive data in ways that could undermine its integrity. In response, companies need to be proactive about establishing and enforcing clear data management guidelines. In this regard, communication is key, and accountability through monitoring initiatives or other efforts will help keep data protected during the transition.


Working from home dilemma: How to manage your team, without the micro-management

Employees need to feel connected and trusted. Yet leaders who find it tough to trust their workforce might opt for micro-management; they'll continue to check up on their workers rather than checking in to see how they're getting on. Peterson says leaders should look to develop a management style that cultivates wellbeing. In uncertain times, employees need a sense of certainty from their leaders. Executives should ensure their staff feel engaged, not micro-managed. "It's more important than ever for managers to ask whether people are getting their ABCs: their autonomy, belonging and competence. Leaders who don't get that from their own boss will tend to overcompensate with the people they're managing; they'll micro-manage, and that's not helpful," he says. Lily Haake, head of the CIO Practice at recruiter Harvey Nash, agrees that leaders who micro-manage will struggle in the new normal. They won't get the best from their workers and their effectiveness will suffer. Haake says managers who want to cultivate wellbeing need to pick up on subtle signs that all isn't well. Executives should adopt a considered approach, using a technique like active listening, to pick up on potential issues before they become major problems.


The Fourth Industrial Revolution: Legal Issues Around Blockchain

Stakeholders in blockchain solutions will need to ensure that their products comply with a legal and regulatory framework that was not conceived with this technology in mind. From a commercial law standpoint, the parties must consider how smart contracts will be negotiated, executed and administered on a blockchain in a legal and compliant fashion. Liability needs to be addressed. What if the contract has been miscoded? What if it does not achieve the parties' intent? The parties must also agree on applicable law, jurisdiction, proper governance, dispute resolution, privacy and more. There are public policy concerns that should be taken into account in shaping new laws, rules and regulations. For example, permissionless blockchains can be used for illegal purposes such as money laundering or circumventing competition laws. Also, participants may be exposed to irresponsible actions on the part of the "miners" who create new blocks. Unfortunately, there aren't any current legal remedies for addressing corrupt miners. As lawyers and technologists ponder these issues, several solutions are being bandied about. One possible remedy involves a hybrid of permissioned and permissionless blockchains.


Why enterprises are turning from TensorFlow to PyTorch

PyTorch is seeing particularly strong adoption in the automotive industry—where it can be applied to pilot autonomous driving systems from the likes of Tesla and Lyft Level 5. The framework also is being used for content classification and recommendation in media companies and to help support robots in industrial applications. Joe Spisak, product lead for artificial intelligence at Facebook AI, told InfoWorld that although he has been pleased by the increase in enterprise adoption of PyTorch, there’s still much work to be done to gain wider industry adoption. “The next wave of adoption will come with enabling lifecycle management, MLOps, and Kubeflow pipelines and the community around that,” he said. “For those early in the journey, the tools are pretty good, using managed services and some open source with something like SageMaker at AWS or Azure ML to get started.” ... “The TensorFlow object detector brought memory issues in production and was difficult to update, whereas PyTorch had the same object detector and Faster-RCNN, so we started using PyTorch for everything,” Alfaro said. That switch from one framework to another was surprisingly simple for the engineering team too.


Techno-nationalism isn’t going to solve our cyber vulnerability problem

Techno-nationalism is fueled by a complex web of justified economic, political and national security concerns. Countries engaging in “protectionist” practices essentially ban or embargo specific technologies, companies, or digital platforms under the banner of national security, but we are seeing it used more often to send geopolitical messages, punish adversary countries, and/or prop up domestic industries. Blanket bans give us a false sense of security. At the same time, when any hardware or software supplier is embedded within critical infrastructure – or on almost every citizen’s phone – we absolutely need to recognize the risk. We need to take seriously the concern that their kit could contain backdoors allowing that supplier to be privy to sensitive data or to facilitate a broader cyberattack. Or, as is the lingering case with TikTok, the concern is whether the collection of data on U.S. citizens via an entertainment app could be forcibly seized under Chinese law and enable state-backed cyber actors to then target and track federal employees or conduct corporate espionage.



Quote for the day:

"Stand up for what you believe, let your team see your values and they will trust you more easily." -- Gordon Tredgold

Daily Tech Digest - December 06, 2020

Ransomware Set for Evolution in Attack Capabilities in 2021

“The Maginot line of cybersecurity transformation failed as the first adopters were the e-crime groups and cybercrime cartels, and we just have to pay attention now as perimeter defenses have failed and continue to fail, and visibility and hardening has become an extreme challenge. Most attacks you see today are attacks from the inside out – digital insiders using trusted ecosystems to leverage ransomware attacks and espionage and crime campaigns.” Looking at ransomware in particular, the trio said they do not see this stopping or slowing down “and we continue to predict that this is going to extend significantly,” Foss said. He claimed ransomware groups have brought more people into their groups and are making sure they are getting trusted people, with nation state adversaries taking part as well. “We see this reaching out to additional operating systems; traditionally this has only impacted Windows primarily, but with MacOS having such a market reach in the professional ecosystem of most organizations, we predict it will be targeted as well,” Foss said. “Linux is one we have started to see more campaigns begin to target, and a lot are looking at defacing webpages in addition to taking over core components of ecosystems that these companies operate.”


Rethinking Robotic Process Automation (RPA)

You can't converse much with anyone these days about automation without talking RPA. It seems the little bots are getting everywhere. It's almost like an alien invasion! But always, the talk seems to be about creating and imposing bots on us. A bot for this and a bot for that; pretty soon you have dozens of little creatures (think about all the little gremlins in the film of the same name!) all nibbling away at pieces of your work. Helpful they may be, but at what cost? In the UK and USA, as we came out of the 2008 financial crisis, economists were left scratching their heads. They were wrestling with what they call the productivity puzzle. Historically, economic growth has always been closely tied to productivity, e.g., if output per worker does not grow, then the economy does not grow. In the UK, productivity was actually lower than before the crisis hit. So if productivity growth is required, it only stands to reason that tools to increase productivity are a useful thing to have. (I know I am oversimplifying, but I think it works for where we are going). What if RPA, instead of being about Robotic Process Automation, became about Robotic Process Assistants? In this new world, we would each have just one robot on our desktop/laptop/machine, a little like Automator on a Mac.


Quantum Sensors Will Revolutionise The Tech Industry

Measurement devices that exploit quantum properties have been around for a while, such as atomic clocks, laser distance meters, and magnetic resonance imaging used for medical diagnosis. What can now be considered new is that individual quantum systems, like atoms and photons, are increasingly used as measurement probes. The entanglement and manipulation of quantum states are used to improve the sensitivity, even beyond the limit set by a conventional formulation of the quantum mechanical uncertainty principle. Yet, many scientists believe that quantum will enjoy its first real commercial success in sensing. That’s because sensing can avail the very characteristic that makes building a quantum computer so difficult: the extraordinary sensitivity of quantum states to the environment. Whether they are responding to the gravitational pull of buried objects or picking up magnetic fields from the human brain, quantum sensors can recognize a wide range of tiny signals across the world. Some physicists believe that gravity-measuring quantum sensors, in particular, will become more widespread quickly with a potential market of USD 1 billion a year.


Banking to groceries — Data Protection Authority has multi-sector role, but must be efficient

First, the Data Protection Authority should follow a risk-based approach that is implicitly present in the Bill. For example, in many places, the Bill requires the DPA to consider the risk of harm to consumers while framing regulations. Additionally, the Bill categorises data into personal data, sensitive personal data, and critical personal data to differentiate the varying levels of risks that emanate from the misuse of data. Finally, the Bill creates a differential level of regulation between ordinary firms that use data, significant data fiduciaries, and small entities. These point to the fact that risk-based regulation must be inherent to the DPA’s strategic approach. Within this overall framework, the DPA can prioritise its resources by focusing on processing sensitive and critical personal data, and by overseeing significant data fiduciaries. This will allow the DPA to first build capacity in areas that pose the greatest threat to consumers, rather than expending its limited resources to regulate all sectors of economic activity. The DPA can further sharpen its focus by having a low threshold for exempting small entities. This will allow the DPA to focus its regulatory capacity towards firms that pose a larger risk to consumers by collecting and processing large volumes of data.


Australia’s Global RegTech Hub Poised for Growth

Like most businesses, local RegTechs have experienced disruption during the COVID-19 pandemic. The biggest challenge has been an immediate reduction in revenue. A contributing factor is the slowing of export opportunities, following travel restrictions and the postponement of trade events. Nonetheless, Australian RegTechs remain positive about future growth and continue to seek growth capital to fund product development, talent acquisition and market expansion. The pandemic has accelerated a shift towards remote working and digital interactions, increasing the risk of fraud and financial crime, and focusing organisations on the importance of robust cybersecurity. At the same time, Federal and State Governments are recognising the potential of RegTech to efficiently and effectively solve regulatory and compliance challenges, and to become a signature export for Australia. This, combined with regulatory pressure for all regulated entities across a range of industries to adopt RegTech, will create a strong platform for the sector to excel. ... Collectively, these actions will help Australian RegTechs to scale, creating local jobs, and supporting the export of Australian solutions.


Novel Online Shopping Malware Hides in Social-Media Buttons

The imposter buttons look just like the legitimate social-sharing buttons found on untold numbers of websites, and are unlikely to trigger any concern from website visitors, according to Sansec. Perhaps more interestingly, the malware’s operators also took great pains to make the code for the buttons itself look as normal and harmless as possible, to avoid being flagged by security solutions. “While skimmers have added their malicious payload to benign files like images in the past, this is the first time that malicious code has been constructed as a perfectly valid image,” according to Sansec’s recent posting. “The malicious payload assumes the form of an html <svg> element, using the <path> element as a container for the payload. The payload itself is concealed utilizing syntax that strongly resembles correct use of the <svg> element.” To complete the illusion of the image being benign, the malicious payloads are named after legitimate companies. The researchers found at least six major names being used for the payloads to lend legitimacy: facebook_full; google_full; instagram_full; pinterest_full; twitter_full; and youtube_full. The result of all of this is that security scanners can no longer find malware just by testing for valid syntax.


Embedding Trust at the Core of Critical Infrastructure

Technology is no longer an extension of critical infrastructure, but rather at the core of it. The network sits between critical data, assets, and systems, and the users and services that leverage or operate them. This makes it uniquely positioned to add essential visibility and controls for resiliency, but it is also a well-placed, high-value target for attackers. Resiliency of the network infrastructure itself is crucial. Resilience is only achieved by building in steps to verify integrity with technical features embedded in hardware and software. Secure boot ensures a network device boots using only software that is trusted by the Original Equipment Manufacturer. Image signing allows a user to add a digital fingerprint to an image to verify that the software running on the network has not been modified. Runtime defenses protect against the injection of malicious code into running network software, making it very difficult for attackers to exploit known vulnerabilities in software and hardware configurations. Equally important, vendors must use a Secure Development Lifecycle to enhance security, reduce vulnerabilities, and promote consistent security policy across solutions. All of this might sound like geek mumbo-jumbo, but these are non-negotiables in today’s world.
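The image-signing idea above can be sketched in a few lines. Real secure boot chains use asymmetric signatures (e.g., RSA or ECDSA) anchored in hardware; this self-contained toy uses an HMAC over a SHA-256 digest as a stand-in for the vendor's signature, and the key and firmware bytes are invented for illustration.

```python
import hashlib
import hmac

VENDOR_KEY = b"demo-vendor-key"  # stands in for the OEM's protected signing key

def sign_image(image: bytes) -> bytes:
    """Produce a digital fingerprint for a firmware image."""
    digest = hashlib.sha256(image).digest()
    return hmac.new(VENDOR_KEY, digest, hashlib.sha256).digest()

def verify_image(image: bytes, fingerprint: bytes) -> bool:
    """Boot-time check: refuse software whose fingerprint doesn't match."""
    return hmac.compare_digest(sign_image(image), fingerprint)

firmware = b"router-os v1.2 (hypothetical image bytes)"
fp = sign_image(firmware)
assert verify_image(firmware, fp)             # untampered image is accepted
assert not verify_image(firmware + b"!", fp)  # any modification is rejected
```

The constant-time comparison (`hmac.compare_digest`) matters in practice: naive byte-by-byte equality can leak timing information to an attacker probing the verifier.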


Out on the edge: The new cloud battleground isn’t in the cloud at all

The big cloud providers are all pursuing similar paths to the edge, anchored by the on-premises versions of their cloud infrastructure that have started rolling out this year. AWS’ Outposts, which was built for use within customer data centers, is also the foundation for AWS Local Zones and AWS Wavelength, which are miniature versions of the cloud giant’s technology stack that live in small, local data centers and telecommunications carriers’ point-of-presence facilities. The company says the experience it gained building out its retail e-commerce business lends itself perfectly to edge computing. “We already have more IoT devices connected to the cloud than any other cloud provider by a large margin. We have to do that for ourselves,” said AWS’ Vass. Customers can employ such Amazon inventions as AWS Greengrass for IoT devices, AWS Snowball for storage and AWS Robomaker for development of robotic devices using Lambda serverless functions “on a POP, in a Local Zone and in the cloud, manage it all centrally and do decentralized execution,” he said. Microsoft’s Azure cloud edge strategy uses a similar approach. Edge Zones, which the company rolled out early this year, are essentially scaled-down Azure data centers located within miles of a customer.


Is RPA the same as AI? What’s the Difference, and What Are the Use Cases?

RPA uses software robots to automate human actions in business processes that involve interaction with digital systems. These actions are usually simple and repetitive, which makes them prone to human error and can provoke a loss of employees’ motivation and efficiency. Software robots and RPA on the other hand bring notable benefits: accuracy (by minimizing human error), reliability (by being always available and by reducing delay), traceability (by providing audit trails and logs), and productivity (by increasing processing speed). A few examples of use cases are automating orders, processing payroll, customer onboarding, data validation, etc. ... Artificial intelligence “combines the human capacities for learning, perception, and interaction [...] at a level of complexity [and automation] that ultimately supersedes our own abilities.” It is a spectrum of technologies (e.g., natural language processing, computer vision, predictive modeling, data clustering, and many more) that opens new use cases for businesses, as well as reducing the entry cost for many existing business problems that still require too much human intervention. ... In order to tackle these use cases and leverage the benefits of AI in business, using a data science and machine learning platform is a best practice: it is the key to successfully scaling AI projects and to bringing a robust data methodology to all levels of the business.
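The benefits listed above (accuracy, reliability, traceability) can be illustrated with a toy "software robot" for the data-validation use case. The validation rule, order fields, and bot name are hypothetical, not taken from any RPA product; the point is that a scripted worker applies the same rule every time and leaves an audit trail behind.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("order-bot")  # hypothetical bot name

def validate_order(order: dict) -> bool:
    """One repetitive, rule-based check a human clerk would otherwise do."""
    return bool(order.get("id")) and order.get("quantity", 0) > 0

def run_bot(orders):
    """Process every order identically and record an audit trail entry
    per decision -- the 'traceability' benefit of RPA."""
    audit = []
    for order in orders:
        ok = validate_order(order)
        audit.append({
            "order": order.get("id"),
            "valid": ok,
            "checked_at": datetime.now(timezone.utc).isoformat(),
        })
        log.info("order %s -> %s", order.get("id"),
                 "accepted" if ok else "rejected")
    return audit

trail = run_bot([
    {"id": "A1", "quantity": 3},
    {"id": "A2", "quantity": 0},  # invalid: zero quantity
])
```

A real deployment would replace `validate_order` with screen- or API-level interactions against the systems the excerpt describes, but the control loop and audit log look much the same.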


When Is It Time to Retire Your Legacy System and Go Cloud?

When your tried-and-tested technology becomes unwieldy and impacts your bottom line, upgrading is critical to fit the business. Let's say you're a construction company that uses an obsolete legacy proof-of-delivery (PoD) system. The system requires three full-time customer service specialists to manage the application (e.g., find the right documents, send them over to customers, work with invoices, and so on). Due to the use of old-school tech, making a single change or adding a new feature is costly and time-consuming. At the same time, the risk of human error is high and can result in unhappy customers, overheads, and delayed payments. Furthermore, customers call you to request their PoD, and the number of monthly calls now exceeds 1,000 and requires a lot of manual labor. This is a telltale sign that your traditional processes aren't effective, which badly affects your entire business. Creating a Cloud-based and easy-to-use PoD portal would ensure maximum automation of all relevant processes, elimination of customer calls or their reduction to the minimum, and significant time- and cost-savings and increased efficiency.



Quote for the day:

"Anger and intolerance are the enemies of correct understanding." -- Mahatma Gandhi

Daily Tech Digest - December 05, 2020

What Tech Jargon Reveals about Bias in the Industry

Tech language was developed back in the early days of modern computing, during a time when racism around the world was much more explicit and often went unchallenged. But there is no reason we can’t change that language. It’s not embedded in the code itself; it’s just how we talk about these concepts. I recently heard of an example where a team of coders working on a solution had to go through the “blacklist and whitelist” of terms/commands for a specific product. The “blacklist” was terms/commands they couldn’t or shouldn’t use while the “whitelist” was stuff that’s OK. Because of the Black Lives Matter movement and what’s in the news, they noticed these terms in a new light for the first time and changed the language they were using to avoid using those racialized terms. It’s easy to just use different words, so why not? It’s an easy low-cost, low-tech solution to change language and improve output. Recently, Microsoft removed terms like these from their documentation. Cloudflare is debiasing some of the terms used in their coding. There are no reasons why such simple conscious actions can’t be undertaken for the benefit of us all. The benefits of diversity are widely stated. But they’re actually only available to companies when they include people.


How to make remote pair programming work

When you cannot assemble a team physically, turn to pair programming remotely. But to see the benefits of remote pair programming, approach the practice systematically with one of the following styles: unstructured, driver/navigator or ping-pong. Plan pair programming remotely with decisions about the team's skill level: Should novices work with experts, or is a different approach better? Editor's note: Interest in remote pair programming has risen during the global COVID-19 pandemic. Developers working in distributed, at-home settings for the first time should also check out tips to empower productive remote dev teams, and insights into psychological safety when the workplace is suddenly part of home life. ... Most pair programming relationships fall into the unstructured style, where two programmers work together in an ad hoc manner. With the collaboration being loosely guided, both programmers should have matching skill levels. A common variant of this style is the unstructured expert-novice pair, where an expert programmer guides a novice. An unstructured approach is hard to discipline and unlikely to persist on longer projects. Unstructured pair programming is also harder to sustain remotely than styles with established guidelines.


Enterprise Architecture: What It Is, Why It's Important And How To Talk About It

The enterprise architect’s ultimate goal is to enact effective and measurable change. To do so, architects work to create not just a complete picture of the organization, but also roadmaps that represent different desired future states. By mapping out the paths to desired future states, they can decide the best path to take—with metrics to back up that decision showing how much better the organization will operate once changes are made. With precise understanding of the tradeoffs that come with each potential scenario, architects can propose multiple solutions in line with changing strategies. These scenarios can be optimized for different business outcomes, like growth, cost optimization, risk reduction, etc., and ultimately drive important business decisions that can be confidently backed with data. Modern enterprise architecture tools go far beyond the old-school perceptions of EA as a simple visualization tool, and now include dynamic and collaborative data that supports the different ways to model future states. One example of enterprise architecture in action comes from New Zealand’s largest retail grocery organization, Foodstuffs, which implemented enterprise architecture to help it stay agile and competitive.


Intel details Horse Ridge II as helping overcome quantum computing hurdle

Horse Ridge II, Intel says, supports "enhanced capabilities and higher levels of integration for elegant control of the quantum system". New features include the ability to manipulate and read qubit states and control the potential of several gates required to entangle multiple qubits. Horse Ridge II builds on the first-generation system-on-chip (SoC) ability to generate radio frequency pulses to manipulate the state of the qubit, known as qubit drive. "With Horse Ridge I, we essentially were able to drive the qubit, basically apply signals that would manipulate the state of the qubit between 0-1; with Horse Ridge II, we can not only drive the qubit, but we can read out the state of the qubit, we can apply pulses that would allow us to control the interaction between two qubits, and so we've added additional controller capabilities to Horse Ridge II," Clarke said. "We have a very programmable filter that would allow us to send a variety of different pulse shapes to control our qubits, we have an integrated microcontroller, we have a lot of DACs -- digital-to-analogue converters -- that would allow us to control the individual qubits to a greater extent and these DACs would otherwise be discrete boxes in a control rack external to the refrigerator, so we're starting to take some of these boxes and put them into our SoC inside of our qubit refrigerator."
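To give a feel for the "pulse shapes" and DACs Clarke describes, here is a dependency-free sketch that samples a Gaussian drive envelope (a commonly used qubit-drive shape) and quantizes it to the integer codes an n-bit DAC would emit. The sample count, sigma fraction, and 12-bit resolution are illustrative choices, not Horse Ridge specifications.

```python
import math

def gaussian_pulse(n_samples: int, amplitude: float = 1.0,
                   sigma_frac: float = 0.15):
    """Sample a Gaussian envelope, a common shape for qubit drive pulses."""
    mid = (n_samples - 1) / 2
    sigma = sigma_frac * n_samples
    return [amplitude * math.exp(-((i - mid) ** 2) / (2 * sigma ** 2))
            for i in range(n_samples)]

def to_dac_codes(samples, bits: int = 12):
    """Quantize samples in [-1, 1] to the signed integer codes
    an n-bit DAC would output."""
    full_scale = 2 ** (bits - 1) - 1  # e.g., 2047 for 12 bits
    return [round(s * full_scale) for s in samples]

codes = to_dac_codes(gaussian_pulse(64))
```

A control rack (or an integrated cryo-controller) streams such code sequences through the DAC to shape the microwave tone that actually manipulates the qubit.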


What organisations should expect next in the evolution of data

Before Covid-19 hit, data was already becoming fundamental to organisations’ future success. That journey has been supercharged. According to new research from Druva, 79% of IT decision makers in the US and UK now see data management and protection as key to competitive advantage. Similarly, 73% say they rely more heavily on data for business decisions, while 33% believe its value has permanently increased since the pandemic began. Therefore, if the message for IT leaders on their data strategy pre-pandemic was ‘get moving’, in 2021 it will be ‘go faster’. As the move towards a digitally-led future gathers pace, we’ll see a growing number of organisations move to make data a pervasive part of everything, from operational decision-making to customer experiences. Rapid availability and analysis will be vital. That’s not to say this transformation comes without risk. The same Druva research found 73% of IT decision makers have become more concerned about protecting their data from ransomware, and rightly so. Many report a year-on-year increase in phishing, malware and ransomware attacks. With large numbers of people working outside the office and some high-profile recent successes for cyber criminals, we can expect this threat to grow further in 2021.


Blockchain Attempts To Secure The Supply Chain

Counterfeiting is a real and growing problem. “We have several customers who are very concerned about counterfeiting and other security issues, and they are thinking of multiple ways to secure their ICs and systems,” said Geoff Tate, CEO of Flex Logix. This is partly the role of identity, but identity may not be sufficient without the further knowledge of the history of the item. And that history can involve an enormous range of considerations. How much to include must balance the cost of tracking and storing data about huge numbers of individual components and systems against the consequences of having too little historical information. “Blockchains provide a convenient means to permanently record transactions, and they have application to the provenance of components,” said John Hallman, product manager for trust and security at OneSpin Solutions. Dave Huntley, business development at PDF Solutions and co-chair of three SEMI committees/task forces, elaborated further. “When a new asset like a package is assembled, it is enrolled as a brand-new asset on the blockchain, along with its bill of materials,” he said. “You now have a genealogy, and you could take a module from a car, open it up, figure out the printed circuit board and slide it out, open that up, look at the packages inside, open one of them up, and look at the die inside. ...”
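Huntley's genealogy idea can be sketched with a minimal hash-chained ledger. This is a simplification of a real blockchain (no consensus, no signatures), and the asset names are invented, but it shows the property the excerpt relies on: each enrollment commits to the previous record, so tampering with any asset's history invalidates every later block.

```python
import hashlib
import json

def _hash(block: dict) -> str:
    """Deterministic digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ProvenanceChain:
    """Append-only record of enrolled assets and their bills of materials."""

    def __init__(self):
        self.blocks = []

    def enroll(self, asset_id: str, bill_of_materials: list):
        """Enroll a new asset, committing to the previous block's hash."""
        prev = _hash(self.blocks[-1]) if self.blocks else "0" * 64
        self.blocks.append({"asset": asset_id,
                            "bom": bill_of_materials,
                            "prev": prev})

    def verify(self) -> bool:
        """Recompute every link; any rewritten history breaks the chain."""
        for i in range(1, len(self.blocks)):
            if self.blocks[i]["prev"] != _hash(self.blocks[i - 1]):
                return False
        return True

chain = ProvenanceChain()
chain.enroll("die-001", [])
chain.enroll("package-77", ["die-001"])
chain.enroll("module-9", ["package-77", "pcb-4"])
assert chain.verify()

chain.blocks[0]["bom"] = ["counterfeit-die"]  # attempt to rewrite history
assert not chain.verify()
```

Opening up a module and walking its bill of materials back to individual dies, as in Huntley's example, corresponds to following the `bom` references down the chain.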


Building real cyber resiliency in government

While threats are constantly evolving, Branko Bokan, a cybersecurity specialist at CISA, said the tactics, techniques and procedures are actually the same -- the real change is in the distribution type and frequency of these attacks. “Regardless of how well we try to prevent cyberattacks, they will always happen, and we have to be ready and able to detect bad things when they happen, or as soon as possible after they happen,” he said. Often, organizations think of cybersecurity as preventing/protecting networks against cyber threats – but that is just one element of the cybersecurity framework, as outlined by the National Institute of Standards and Technology. The NIST framework includes five functions, which match the pillars for cyber resiliency: identify, protect, detect, respond and recover. By dividing cybersecurity into these five stages, agencies can identify cyber actions adversaries might take. It can also help them create a coverage map of the threat landscape to see how their current capabilities can protect, detect and respond to each one of these actual threat actions – and identify where the gaps are. As agencies take a threat-based approach to security, cloud is also playing a large role in resiliency plans.


Microsoft Cloud Security Exec Talks New Tech, WFH, Gamification

From a cloud operator's perspective, Ollmann is seeing the growth of cloud security posture management (CSPM) technologies, which are meant to help security teams bring together their assets and resources in one place to better manage and understand their cloud infrastructure.  "CSPM has been that vehicle for providing visibility of security risk, vulnerabilities, vulnerability management, and then a little bit of gamification to enable and help customers and organizations improve their security posture as they go along," he explained. Security posture management gives infosec teams visibility and control while managing policies. The gamification – a "loose interpretation" of the term, Ollmann noted – is in the score, which informs teams of the risk or security value in a particular asset, resource, application, or environment as a whole. Every vulnerability and poor or absent configuration has a value tied to it. By addressing the weaknesses, a team can increase its overall security score. "Security will never be 100%, so hopefully as you develop these sorts of things, you keep improving on your score," he said. Some larger businesses have multiple apps in multiple environments, and teams compete against one another to boost their numbers.
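The scoring mechanic Ollmann describes can be sketched as a simple weighted penalty model. The severity weights, baseline, and finding names below are hypothetical, not any CSPM product's actual scale; the point is the feedback loop: every unresolved weakness subtracts from the score, and fixing one raises it.

```python
# Hypothetical severity weights; real CSPM products use their own scales.
WEIGHTS = {"critical": 30, "high": 15, "medium": 5, "low": 1}

def posture_score(findings, baseline: int = 100) -> int:
    """Start from a perfect score and subtract a weight per open finding.
    Addressing weaknesses raises the score -- the 'gamified' feedback loop."""
    penalty = sum(WEIGHTS.get(f["severity"], 0)
                  for f in findings if not f["resolved"])
    return max(0, baseline - penalty)

findings = [
    {"id": "open-storage-bucket", "severity": "critical", "resolved": False},
    {"id": "missing-mfa", "severity": "high", "resolved": False},
    {"id": "stale-account", "severity": "low", "resolved": True},
]
before = posture_score(findings)  # 100 - 30 - 15 = 55
findings[0]["resolved"] = True    # team closes the critical finding
after = posture_score(findings)   # 100 - 15 = 85
```

As the excerpt notes, the score never reaches a meaningful "100% secure"; its value is in giving teams (and competing business units) a number that moves when they act.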


The resurgence of enterprise architecture

Because enterprise architecture enables a business to map out all their systems and processes and how they connect together, EA is becoming a “very important method and tool to drive forward digital transformation,” said Christ. He explained that since most transformations don’t start off as greenfield projects, about 70% of them fail due to their existing IT landscape. Having a solid baseline, which EA aims to provide, is crucial for any transformation initiative.  “The reason for this is that once you’ve started a transformation program, you discover new dependencies because of applications connected to other systems that you never knew of before. So replacing them with better applications, with newer interfaces, and with better APIs all of a sudden isn’t as easy as you thought when you were starting the transformation program,” he explained.  Businesses also want to understand where their investments in the IT landscape are going, and connect the business strategic goals to the activities in their transformation program. “This is where enterprise architecture can help you. It allows you to look at this whole hierarchy of objectives and programs you are setting up, the affected applications you are having, and the underlying changes in detail,” said Christ.


Quantum Acceleration in 2020

Many frameworks and tools have emerged for developing quantum applications based on these algorithms. Microsoft’s Quantum Development Kit (QDK), for example, provides a tool set integrated with leading development environments, open-source resources, and the company’s high-level programming language, Q#. It also offers access to quantum inspired optimization (QIO) solvers for running optimization problems in the cloud. For building quantum circuits and algorithms that take advantage of quantum processors, IBM offers Qiskit, an open-source quantum computing library for Python. Cirq is yet another quantum programming library created by the team of scientists and engineers at Google. It contains a growing set of functionalities allowing users to manipulate and simulate quantum circuits. Finally, Quil is a quantum programming toolkit from Rigetti that also provides a diverse array of functionalities and data structures for supporting quantum computation. There are also packages, such as Xanadu’s Strawberry Fields and D-Wave's Leap, aimed at quantum backends that are not based on the gate model paradigm. In addition, we see the ongoing creation of domain-specific tools, such as OpenFermion and Xanadu’s PennyLane, purpose-built for running quantum chemistry and quantum machine learning applications, respectively.
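Frameworks like Qiskit and Cirq let developers compose gates into circuits; under the hood, a simulator is doing linear algebra on a statevector. This dependency-free sketch shows that algebra for the canonical two-gate Bell-state circuit (Hadamard then CNOT), the "hello world" these libraries are typically demonstrated with. The basis ordering convention below (qubit 0 as the left bit of |ab>) is a choice made for this sketch.

```python
import math

# Two-qubit statevector amplitudes over the basis |00>, |01>, |10>, |11>.

def hadamard_q0(state):
    """Apply a Hadamard gate to qubit 0 (the left bit of |ab>)."""
    s = 1 / math.sqrt(2)
    a, b, c, d = state
    return [s * (a + c), s * (b + d), s * (a - c), s * (b - d)]

def cnot_q0_q1(state):
    """CNOT with qubit 0 as control: flips qubit 1 when qubit 0 is 1,
    i.e., swaps the |10> and |11> amplitudes."""
    a, b, c, d = state
    return [a, b, d, c]

state = [1.0, 0.0, 0.0, 0.0]            # start in |00>
state = cnot_q0_q1(hadamard_q0(state))  # the standard Bell-state circuit
probs = [amp ** 2 for amp in state]
# Measurement yields |00> or |11> with probability 1/2 each: entanglement.
```

In Qiskit the same circuit is two calls (`h(0)` then `cx(0, 1)` on a `QuantumCircuit`); the frameworks' value is in scaling this bookkeeping to many qubits and real backends.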



Quote for the day:

"The only person you are destined to become is the person you decide to be." -- Ralph Waldo Emerson