Daily Tech Digest - December 12, 2020

E-commerce innovation in 2021 will look like what was projected for 2025

According to McKinsey, over 75% of U.S. consumers have changed shopping behavior and switched to new brands during the COVID-19 pandemic. The top three reasons for trying a new brand were value, availability, and convenience. The most important filter for discretionary spend is safety. The ability to offer e-commerce, contactless payments, online ordering with curbside pickup, and home delivery are all requirements for competing in the next normal. Salesforce research shows that U.S. retailers offering creative pickup options experienced 29% growth in sales, compared to 22% for retailers with only a simple fulfillment option. ... Over the past five years we have seen growing investment in social channels as advertising vehicles. In 2021, we will see brands take a step further, adopting commerce capabilities provided by these social platforms. We also anticipate expanding relationships between brands and social influencers as an accelerant to grow sales. This shift will also challenge brands to rethink the traditional definition of "omni-channel", expanding it to include the ability to identify customers at any location and to deliver and service their needs, independent of time or location, based on the customer's preferred method of delivery.


Create a DevOps culture with open source principles

We can split remote work into fully remote and hybrid working models. A fully remote working model means a DevOps team is geographically dispersed. The members have no desk lying empty back at the office with their name on it. However, COVID-19 restrictions have made every team a fully remote team, at least for the time being. A fully remote team’s benefits include increased agility and playing time zones to the advantage of your delivery cycle. The challenges of a new remote DevOps team run the gamut right now, depending on the level of support their organization had for remote workers pre-COVID. In contrast, a hybrid DevOps team still maintains a presence in a corporate office. Core team members may have permanent seats inside a corporate office. Other team members may work from home or a satellite office full-time or part-time. COVID-19 restrictions add a new factor to hybrid teams because some companies may stagger returns to offices. A hybrid DevOps team’s benefits include having the best of both worlds. Team leadership can still maintain a face in the office. Their developers get the option to work where they’re the most productive. The challenges of a hybrid DevOps team can range from communications to system access issues. 


Understand the IoT Cybersecurity Improvement Act, now law

"Ultimately, the government wants to put together a strategy on how to address IoT devices and what those specific security baseline requirements should be," said Donald Schleede, information security officer at Digi International. To start, the law requires NIST to develop minimum security standards for connected devices that the federal government purchases or uses. It also has the agency develop standards and guidelines for the use and management of all IoT devices that the government owns or uses. It further requires NIST to address secure development, identity management, patching and configuration management as part of its security standards. It prohibits federal entities from buying or using any IoT device determined to be noncompliant with the NIST standards. The legislation requires the Department of Homeland Security to review such measures every five years to determine any necessary revisions. This ensures the federal requirements for connected devices remain current as technology, standards and attack scenarios evolve. The federal law provides more-specific IoT security standards for connected devices than past industry-led attempts and legislative measures have, Schleede said.


New ways Google Workspace works with tools you already use

Creating and collaborating on content is at the heart of getting work done. When working with content received from customers, partners, or teammates, employees shouldn’t lose time converting files or working in unfamiliar tools. With Google Drive, you can store and share over 100 different file types and formats, including Microsoft Word, Excel, and PowerPoint files, as well as PDFs, images, and videos. And by using intelligent features like Priority and Quick Access in Drive, you can find files nearly 50% faster. With Office editing, users can also easily edit Microsoft Office files in Google Docs, Sheets, and Slides without converting them, with the added benefit of layering on Google Workspace’s enhanced collaborative and assistive features. From assigning action items via comment, to writing faster with Smart Compose, to accelerating data entry with Sheets Smart Fill, Office editing brings Google Workspace functionality to your Office files. And we recently extended Office editing to the Docs, Sheets, and Slides mobile apps as well, so you can easily work on Office files on the go.  Starting today, you can also open Office files for editing directly from a Gmail attachment, further simplifying your workflows.


‘Smellicopter’ uses a live moth antenna to hunt for scents

“From a robotics perspective, this is genius,” says coauthor and co-advisor Sawyer Fuller, assistant professor of mechanical engineering. “The classic approach in robotics is to add more sensors, and maybe build a fancy algorithm or use machine learning to estimate wind direction. It turns out, all you need is to add a fin.” Smellicopter doesn’t need any help from the researchers to search for odors. The team created a “cast and surge” protocol for the drone that mimics how moths search for smells. Smellicopter begins its search by moving to the left for a specific distance. If nothing passes a specific smell threshold, Smellicopter then moves to the right for the same distance. Once it detects an odor, it changes its flying pattern to surge toward it. Smellicopter can also avoid obstacles with the help of four infrared sensors that let it measure what’s around it 10 times each second. When something comes within about eight inches (20 centimeters) of the drone, it changes direction by going to the next stage of its cast-and-surge protocol. “So if Smellicopter was casting left and now there’s an obstacle on the left, it’ll switch to casting right,” Anderson says.
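The cast-and-surge behavior described above can be sketched as a tiny state machine. This is a toy illustration only; the threshold and the odor readings are invented, and the real Smellicopter uses tuned flight parameters and live sensor input:

```python
def cast_and_surge(odor_readings, threshold=0.5):
    """Toy version of the cast-and-surge search: sweep left, then right,
    until an odor reading crosses the threshold, then surge toward it.
    All numbers here are invented for illustration."""
    moves, direction = [], "left"
    for reading in odor_readings:
        if reading >= threshold:
            moves.append("surge")              # odor detected: fly toward it
        else:
            moves.append(f"cast {direction}")  # no odor: sweep crosswind
            direction = "right" if direction == "left" else "left"
    return moves

print(cast_and_surge([0.1, 0.2, 0.7, 0.8]))
# → ['cast left', 'cast right', 'surge', 'surge']
```

The obstacle-avoidance rule quoted by Anderson would simply flip `direction` early when an infrared sensor reports something within about 20 centimeters.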


Tiny four-bit computers are now all you need to train AI

So what does 4-bit training mean? Well, to start, we have a 4-bit computer, and thus 4 bits of complexity. One way to think about this: every single number we use during the training process has to be one of 16 whole numbers between -8 and 7, because these are the only numbers our computer can represent. That goes for the data points we feed into the neural network, the numbers we use to represent the neural network, and the intermediate numbers we need to store during training. So how do we do this? Let’s first think about the training data. Imagine it’s a whole bunch of black-and-white images. Step one: we need to convert those images into numbers, so the computer can understand them. We do this by representing each pixel in terms of its grayscale value—0 for black, 1 for white, and the decimals between for the shades of gray. Our image is now a list of numbers ranging from 0 to 1. But in 4-bit land, we need it to range from -8 to 7. The trick here is to linearly scale our list of numbers, so 0 becomes -8 and 1 becomes 7, and the decimals map to the integers in the middle.
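The linear rescaling described above fits in a few lines of Python. This is a toy illustration of the mapping only, not the actual training stack:

```python
def to_4bit(x):
    """Linearly map a grayscale value in [0.0, 1.0] onto the 16 signed
    integers [-8, 7] that a 4-bit word can represent."""
    scaled = x * 15.0 - 8.0                 # 0.0 -> -8.0, 1.0 -> 7.0
    return max(-8, min(7, round(scaled)))   # round to the nearest representable integer

pixels = [0.0, 0.25, 1.0]
print([to_4bit(p) for p in pixels])  # endpoints map to -8 and 7
```

Every value a real 4-bit trainer touches (inputs, weights, intermediate activations) must survive a rounding step like this, which is exactly why calibrating the scale is the hard part.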


How can the cloud industry adapt to a post-COVID world?

Technology will play a major part in instigating the changes needed in future, with a key role to play for many of the firms that have enjoyed success during the pandemic. While demand for software such as video conferencing platforms may not be as sky-high as it was at the beginning of the pandemic, Wrenn argues the next big step is how cloud companies can eat further into the market share enjoyed by the traditional telephone industry. “More and more businesses are using Microsoft Teams or Zoom to interact,” he explains, “when previously they would have used conference lines or even called a person directly due to it being more convenient. Cloud providers need to think about how they can make the most of this opportunity as the way in which people interact changes.” To some extent, we should all consider ourselves lucky the global pandemic happened when it did, given that cloud computing has only recently become as advanced as it is now. Thus, rather than ‘profiting from the pandemic’, this period has been the making of the industry. After all, “cloud storage, processing, and compute facilities are already set up, and ready to expand easily and automatically, as and when enterprises need,” according to Royston, who claims this wouldn’t have been the case ten to 15 years ago.


Feds: K-12 Cyberattacks Dramatically on the Rise

“Unfortunately, K-12 education institutions are continuously bombarded with ransomware attacks, as cybercriminals are aware they are easy targets because of limited funding and resources,” said James McQuiggan, security awareness advocate at KnowBe4, via email. “The U.S. government is aware of the growing need to protect the schools and has put forth efforts to provide the proper tools for education institutions. A bill has been introduced called the K-12 Cybersecurity Act of 2019, which unfortunately has not been passed yet. This type of action by the government will start the process of protecting school districts from ransomware attacks.” Meanwhile, other malware types are being used in attacks on schools – with ZeuS and Shlayer the most prevalent. ZeuS is a banking trojan targeting Microsoft Windows that’s been around since 2007, while Shlayer is a trojan downloader and dropper for MacOS malware. These are primarily distributed through malicious websites, hijacked domains and malicious advertising posing as a fake Adobe Flash updater, the agencies warned. Social engineering in general is on the rise in the edtech sector, they added, against students, parents, faculty, IT personnel or other individuals involved in distance learning.
The Security Operations Center is an integrated unit dealing with high-quality IT security operations. The primary functions of a Security Operations Center are to monitor, prevent, detect, investigate, and respond to various cyber threats. SOC teams monitor and protect an organization’s assets like intellectual property, personnel data, business systems, and brand integrity. The SOC team plays an important role in organizations by defending them against incidents and intrusions — regardless of source, time, or the type of attack — through their 24/7 monitoring. ... An increase in the usage of cloud-based solutions across SMEs is the crucial factor driving demand in the global SOC-as-a-Service market. The adoption of technologies like machine learning, artificial intelligence, and blockchain for cyber defense has further opened new growth avenues in this market. There is increased demand for Security Operations Center analysts across North America, Europe, the Middle East, Africa, Asia Pacific, and Latin America. Of these regions, North America holds a dominant share of the market.


Australian intelligence community seeking to build a top-secret cloud

The project does not involve agencies collecting any new data. Nor does it expand their remit. All existing regulatory arrangements still apply. Rather, the NIC hopes that a community cloud will improve its ability to analyse data and detect threats, as well as improve collaboration and data sharing. "Top Secret" is the highest level in Australia's Protective Security Policy Framework. It represents material which, if released, would have "catastrophic business impact" or cause "exceptionally grave damage to the national interest, organisations or individuals". Until very recently the only major cloud vendor to handle top secret data, at least to the equivalent standards of the US government, was Amazon Web Services (AWS). AWS in 2017 went live with an AWS Secret Region targeted towards the US intelligence community, including the CIA, and other government agencies working with secret-level datasets. In Australia, AWS was certified to the protected level, two classification levels down from top secret. The "protected" certification came via the ASD's Certified Cloud Services List (CCSL), which was shuttered in June, leaving certifications gained through the CCSL process void.



Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree

Daily Tech Digest - December 11, 2020

5 signs your agile development process must change

Agile teams figure out fairly quickly that polluting a backlog with every idea, request, or technical issue makes it difficult for the product owner, scrum master, and team to work efficiently. If teams maintain a large backlog in their agile tools, they should use labels or tags to filter the near-term versus longer-term priorities. An even greater challenge arises when teams adopt just-in-time planning and prioritize, write, review, and estimate user stories in the days leading up to sprint start. It’s far more difficult to develop a shared understanding of the requirements under time pressure. Teams are less likely to consider architecture, operations, technical standards, and other best practices when there isn’t sufficient time dedicated to planning. Worse, it’s hard to accommodate downstream business processes, such as training and change management, if business stakeholders don’t know the target deliverables or medium-term roadmap. There are several best practices for planning backlogs, including continuous agile planning, Program Increment planning, and other quarterly planning practices. These practices help multiple agile teams brainstorm epics, break down features, confirm dependencies, and prioritize user story writing.
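The labels-or-tags filtering idea is mechanically simple; as a sketch, with an invented backlog shape (real agile tools expose this through their own query languages):

```python
# Hypothetical backlog items, each carrying a set of tags.
backlog = [
    {"story": "Checkout redesign", "tags": {"near-term"}},
    {"story": "Data warehouse migration", "tags": {"long-term"}},
    {"story": "Fix login timeout", "tags": {"near-term", "tech-debt"}},
]

def filter_backlog(items, tag):
    """Return the stories carrying the given tag, in backlog order."""
    return [item["story"] for item in items if tag in item["tags"]]

print(filter_backlog(backlog, "near-term"))
# → ['Checkout redesign', 'Fix login timeout']
```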


How to Align DevOps with Your PaaS Strategy

Some organizations are adopting a multi-PaaS strategy, which typically takes the form of developing an application on one PaaS and deploying it to multiple public clouds. However, not all PaaS offerings provide that capability. One reason to deploy to multiple clouds is to increase application reliability. Despite SLAs, outages may occur from time to time. Alternatively, different applications may require different PaaS offerings because the services vary from vendor to vendor. However, more vendors mean more complexity to manage. "Tomorrow, your business transaction is going to be going over SaaS services provided by multiple vendors so I might have to orchestrate across multiple clouds, multiple vendors to complete my business transaction," said Chennapragada. "Tying myself [to] a vendor is going to constrain me from orchestrating, so our clients are thinking of a more cloud-agnostic, vendor-agnostic solution." One general concern some organizations have is whether they have the expertise to manage everything themselves, which has led to a huge proliferation of managed service providers. Offloading that work gives DevOps teams more time to focus on product development and delivery. PaaS expertise can be difficult to find because PaaS skills are niche skills.


Low Code: CIOs Talk Challenges and Potential

CIO viewpoints honestly differed. For example, CIO Milos Topic suggests “it is still early in experimentation in our environment, but it is mostly useful in automating and provisioning repetitive processes and modules. But it is essential to stress that low code doesn't mean hands off.” Meanwhile, CIO David Seidl says “the adoption is big because of the ability to make more responsive changes. The trade-off is interesting. The open question is: can you remove one of the cost layers (maintaining code) and trade it for business logic and platform maintenance? And how do you minimize platform maintenance, and could cloud services help? The big question is: do we consider business logic code? It can be just as complex to build and debug complex business logic in a drag-and-drop tool as in traditional code. So, you win on the UI/layout/integration components, but core code remains an open question.” However, CIO Deb Gildersleeve suggests that low code “gives business users without technical coding expertise the tools to solve their problems. It takes the burden outside of IT but can be provided with guardrails for security governance.”


Security Think Tank: Integration between SIEM/SOAR is critical

Security operations teams will have a playbook which details the decisions and actions to be taken from detection to containment. This may suggest actions to be taken on detection of a suspicious event through escalation and possible responses. SOAR can automate this, taking autonomous decisions that support the investigation, drawing in threat intelligence and presenting the results to the analyst with recommendations for further action. The analyst can then select the appropriate action, which would be carried out automatically, or the whole process can be automated. For example, the detection of a possible command and control transmission could be followed up in accordance with the playbook to gather relevant threat intelligence and information on which hosts are involved and other related transmissions. The analyst would then be notified and given the option to block the transmissions and isolate the hosts involved. Once selected, the actions would be carried out automatically. Throughout the process, ticketing and collaboration tools would keep the team and relevant stakeholders informed and generate reports as required.
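The command-and-control example above can be sketched as a playbook function. Everything here (the alert shape, the enrichment call, the action names) is invented for illustration; real SOAR platforms define playbooks in their own formats:

```python
def run_playbook(alert, threat_intel, analyst_approves):
    """Toy SOAR playbook for a suspected C2 transmission: enrich with
    threat intelligence automatically, act only on analyst approval,
    and always open a ticket. All names here are hypothetical."""
    actions = []
    intel = threat_intel(alert["destination"])           # automatic enrichment
    if intel["malicious"] and analyst_approves(alert, intel, alert["hosts"]):
        actions.append(("block", alert["destination"]))  # block the C2 traffic
        actions.extend(("isolate", h) for h in alert["hosts"])  # quarantine hosts
    actions.append(("ticket", alert["id"]))              # keep stakeholders informed
    return actions

alert = {"id": "INC-1", "destination": "203.0.113.9", "hosts": ["ws-07"]}
print(run_playbook(alert,
                   threat_intel=lambda ip: {"malicious": True},
                   analyst_approves=lambda *args: True))
# → [('block', '203.0.113.9'), ('isolate', 'ws-07'), ('ticket', 'INC-1')]
```

Swapping `analyst_approves` for a function that always returns `True` is exactly the "whole process automated" option the summary mentions.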


Low-Code To Become Commonplace in 2021

The citizen developer concept has been gathering marketing steam, but it might not be just hype. Now, data suggests low-code tools are actually opening doors for such non-developers. Seventy percent of companies said non-developers in their company already build tools for internal business use, and nearly 80% expect to see more of this trend in 2021. It should be noted that low-code and no-code do not seek to replace all engineering talent; instead, they aim to free engineers up to engage in more complex tasks. “With low-code, you free up your engineers to work on harder problems, instead of having them work on basic things,” said Arisa Amano, CEO of Internal. She believes this could translate into more innovation companywide. Bringing non-traditional engineers into the development fold is being met with enthusiasm rather than ambivalence—69.2% of respondents foresee that citizen developers positively affect engineering teams, with the rest primarily exhibiting a neutral reaction. The costs of internal security threats are high. Breaches could decrease customer trust, harm brand reputation and lead to escalating legal fees. With cyberattacks a prevalent concern, cybersecurity must come back in style.


People want data privacy but don’t always know what they’re getting

In practice, differential privacy isn’t perfect. The randomization process must be calibrated carefully. Too much randomness will make the summary statistics inaccurate. Too little will leave people vulnerable to being identified. Also, if the randomization takes place after everyone’s unaltered data has been collected, as is common in some versions of differential privacy, hackers may still be able to get at the original data. When differential privacy was developed in 2006, it was mostly regarded as a theoretically interesting tool. In 2014, Google became the first company to start publicly using differential privacy for data collection. Since then, new systems using differential privacy have been deployed by Microsoft, Google and the U.S. Census Bureau. Apple uses it to power machine learning algorithms without needing to see your data, and Uber turned to it to make sure their internal data analysts can’t abuse their power. Differential privacy is often hailed as the solution to the online advertising industry’s privacy issues by allowing advertisers to learn how people respond to their ads without tracking individuals. But it’s not clear that people who are weighing whether to share their data have clear expectations about, or understand, differential privacy.
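One classic mechanism behind differential privacy, randomized response, fits in a few lines. This is an illustrative sketch, not how any of the companies named above calibrate their deployments:

```python
import random

def randomized_response(truth, rng=random.random):
    """Answer honestly with probability 1/2; otherwise flip a fair coin.
    Any single answer is deniable, yet the population rate is recoverable."""
    if rng() < 0.5:
        return truth
    return rng() < 0.5

def estimate_true_rate(answers):
    """P(yes) = 0.5*p + 0.25, so invert the observed rate to recover p."""
    observed = sum(answers) / len(answers)
    return 2 * observed - 0.5

random.seed(0)
answers = [randomized_response(True) for _ in range(100_000)]
print(estimate_true_rate(answers))  # close to 1.0, the true rate
```

The calibration trade-off the summary describes is visible here: more coin-flipping means stronger deniability but a noisier estimate, and too little means individuals become identifiable.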


Widespread malware campaign seeks to silently inject ads into search results

The malware makes changes to certain browser extensions. On Google Chrome, the malware typically modifies “Chrome Media Router”, one of the browser’s default extensions, but we have seen it use different extensions. Each extension on Chromium-based browsers has a unique 32-character ID that users can use to locate the extension on machines or on the Chrome Web store. On Microsoft Edge and Yandex Browser, it uses IDs of legitimate extensions, such as “Radioplayer” to masquerade as legitimate. As it is rare for most of these extensions to be already installed on devices, it creates a new folder with this extension ID and stores malicious components in this folder. On Firefox, it appends a folder with a Globally Unique Identifier (GUID) to the browser extension. ... Despite targeting different extensions on each browser, the malware adds the same malicious scripts to these extensions. In some cases, the malware modifies the default extension by adding seven JavaScript files and one manifest.json file to the target extension’s file path. In other cases, it creates a new folder with the same malicious components. These malicious scripts connect to the attacker’s server to fetch additional scripts, which are responsible for injecting advertisements into search results.
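As a concrete illustration of those 32-character IDs: Chromium extension IDs use only the letters a through p (hex digits mapped onto letters), so a quick pattern check can at least tell whether a folder name is shaped like a real extension ID. This is a hypothetical helper for triage, not official tooling, and it cannot flag malware that reuses a legitimate ID:

```python
import re

# Chromium extension IDs: exactly 32 characters from the letters a-p.
EXTENSION_ID = re.compile(r"^[a-p]{32}$")

def looks_like_extension_id(name):
    """True if a folder name matches the Chromium extension-ID format."""
    return bool(EXTENSION_ID.match(name))

print(looks_like_extension_id("a" * 32))     # plausible ID format
print(looks_like_extension_id("not-an-id"))  # rejected
```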


Penetration Testing: A Road Map for Improving Outcomes

Traditional penetration testing is a core element of many organizations' cybersecurity efforts because it provides a reliable measurement of the organization's security and defense measures. However, because a client can classify assets as out of scope, the pen test may not give an accurate read on the organization's full security posture. Because the pen-testing approach, authorization process, and testing ranges are defined in advance, these assessments may not measure an organization's true ability to identify and act on suspicious activities and traffic. Ultimately, placing restrictions on a test's scope or duration can harm the tested organization. In the real world, neither time nor scope are of any consideration to attackers, meaning the results of such a test are not entirely reliable. Incorporating objective-oriented penetration testing can improve typical pen-testing systems and, in turn, enhance an organization's security posture and incident response, as well as limit their risk of exposure. The first step is to agree on attackers' likely objectives and a reasonable time frame. For example, consider ways attackers could access and compromise customer data or gain access to a high-security network or physical location. 


Facial recognition's fate could be decided in 2021

Several lawsuits filed in 2020 that could see resolution next year may also have an impact on facial recognition. Clearview AI is facing multiple lawsuits about its data collection. The company collected billions of public images from social networks including YouTube, Facebook and Twitter. All of those companies have sent a cease-and-desist letter to Clearview AI, but the company maintains that it has a First Amendment right to take these images. That argument is being challenged by Vermont's attorney general, the American Civil Liberties Union and two lawsuits in Illinois. Clearview AI didn't respond to requests for comment. The Clearview decision could play a role in facial recognition's future. The industry relies on hordes of images of people, which it gets in many ways. An NBC News report in 2019 called it a "dirty little secret" that millions of photos online have been getting collected without people's permission to train facial recognition algorithms. "We're likely to also see growing amounts of litigation against schools, businesses and other public accommodations under a new wave of biometric privacy laws, including New York City's forthcoming ban on commercial biometric surveillance," said the Surveillance Technology Oversight Project's Cahn.


Hacking Group Dropping Malware Via Facebook, Cloud Services

While the newly discovered DropBook backdoor uses fake Facebook accounts for its command-and-control operations, both SharpStage and DropBook utilize Dropbox to exfiltrate the data stolen from their targets, as well as for storing espionage tools, the report notes. Once a device is compromised, the SharpStage backdoor can capture screenshots, check for Arabic language presence in the victims' device for precision targeting, and download and execute additional components. DropBook, on the other hand, is used for reconnaissance and to deploy shell commands, the report notes. The attackers use MoleNet to collect system information from the compromised devices, communicate with the command-and-control servers and maintain persistence, according to the report. Besides the new backdoor components, researchers note the hackers deployed an open-source remote access Trojan called Quasar, which was previously linked to a Molerats campaign in 2017. Cybereason researchers note that once the DropBook malware is in the victims' devices, it begins its operation by fetching a token from a post on a fake Facebook account.



Quote for the day:

"Example has more followers than reason. We unconsciously imitate what pleases us, and approximate to the characters we most admire." -- Christian Nestell Bovee

Daily Tech Digest - December 10, 2020

Hackers hide web skimmer inside a website's CSS files

Places where web skimmers have been found in the past include inside images such as those used for site logos, favicons, and social media networks; appended to popular JavaScript libraries like jQuery, Modernizr, and Google Tag Manager; or hidden inside site widgets like live chat windows. The latest of these odd places is, believe it or not, CSS files. Standing for cascading style sheets, CSS files are used inside browsers to load rules for stylizing a web page's elements with the help of the CSS language. These files usually contain code describing the colors of various page elements, the size of the text, padding between various elements, font settings, and more. However, CSS is not what it was in the early 2000s. Over the past decade, the CSS language has grown into an incredibly powerful utility that web developers are now using to create powerful animations with little to no JavaScript. One of the recent additions to the CSS language was a feature that would allow it to load and run JavaScript code from within a CSS rule. Willem de Groot, the founder of Dutch security firm Sanguine Security (SanSec), told ZDNet today that this CSS feature is now being abused by web skimmer gangs.


The Line Between Physical Security & Cybersecurity Blurs as World Gets More Digital

For manufacturers, the importance of forcing users to change default credentials before first use has never been higher. The Mirai botnet, one of the most well-known and successful pieces of malware in history, infected millions of connected devices across the globe by exploiting common default username/password combinations. While manufacturers have been aware of the importance of changing default passwords, we are now seeing mechanisms being put in place to ensure a device doesn't function until the password is changed. Going even further, some states, including California, have reinforced that knowledge with legislation mandating their use. Similarly, integrators must be able to keep devices protected during and after the installation process, avoiding the sort of misconfigurations that cyberattackers are known to exploit. IT departments and users themselves also bear a degree of responsibility when it comes to securing their devices by installing product updates and patches in a timely manner. Organizations must ensure that their employees understand the importance of protecting every device on the network, while also effectively vetting the security knowledge and capabilities of both their manufacturer and integrator partners.
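The "device doesn't function until the password is changed" mechanism described above amounts to a simple gate at first boot. A minimal sketch, with invented credential values (real firmware would also enforce password strength and store hashes, not plaintext):

```python
# Hypothetical factory-default credential pairs a device should refuse.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def device_ready(username, password):
    """Return True only once the credentials differ from every known
    default -- the gate that keeps Mirai-style logins from working."""
    return (username, password) not in DEFAULT_CREDENTIALS

print(device_ready("admin", "admin"))       # still the factory default: refuse
print(device_ready("admin", "x7!Lq-92kd"))  # changed: device may start
```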


5 big and powerful Python web frameworks

At its core, CubicWeb provides basic scaffolding used by every web app: a “repository” for data connections and storage; a “web engine” for basic HTTP request/response and CRUD actions; and a schema for modeling data. All of this is described in Python class definitions. To set up and manage instances of CubicWeb, you work with a command-line tool similar to the one used for Django. A built-in templating system lets you programmatically generate HTML output. You can also use a cube that provides tools for web UIs, such as that for the Bootstrap HTML framework. Although CubicWeb supports Python 3 (since version 3.23), it does not appear to use Python 3’s native async functionality. ... Django has sane and safe defaults that help shield your web application from attack. When you place a variable in a page template, such as a string with HTML or JavaScript, the contents are not rendered literally unless you explicitly designate the instance of the variable as safe. This by itself eliminates many common cross-site scripting issues. If you want to perform form validation, you can use everything from simple CSRF protection to full-blown field-by-field validation mechanisms that return detailed error feedback.
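Django's escape-by-default behavior can be mimicked in plain Python to show the idea. This toy is only loosely analogous to Django's template engine and `mark_safe()`; it is not Django code:

```python
import html

class SafeString(str):
    """Marker type: content the developer has explicitly vouched for,
    loosely analogous to Django's mark_safe()."""

def render_variable(value):
    """Escape everything by default -- the 'sane and safe default'
    described above; only explicitly marked content passes through."""
    if isinstance(value, SafeString):
        return str(value)
    return html.escape(str(value))

print(render_variable("<script>alert('xss')</script>"))  # rendered inert
print(render_variable(SafeString("<b>trusted</b>")))     # passed through as-is
```

Because the unsafe path is opt-in rather than opt-out, a forgotten annotation fails closed, which is what eliminates the common cross-site scripting cases.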


Myth vs. reality: a practical perspective on quantum computing

Developers and researchers want to ensure they invest in languages and tools that will adapt to the capabilities of more powerful quantum systems in the future. Microsoft’s open-source Quantum Intermediate Representation (QIR) and the Q# programming language provide developers with a flexible foundation that protects their development investments. QIR is a new Microsoft-developed intermediate representation for quantum programs that is hardware and language agnostic, so it can be a common interface between many languages and target quantum computation platforms. Based on the popular open-source LLVM intermediate language, QIR is designed to enable the development of a broad and flexible ecosystem of software tools for quantum development. As quantum computing capabilities evolve, we expect large-scale quantum applications will take full advantage of both classical and quantum computing resources working together. QIR provides full capabilities for describing rich classical computation fully integrated with quantum computation. It’s a key layer in achieving a scaled quantum system that can be programmed and controlled for general algorithms.


A newly-described 'blockchain denial of service' attack could convince miners to stop mining

The attack works by targeting the system’s reward system in a way that discourages miner participation. Specifically, the attacker publishes a proof to the blockchain that signals to other miners that the attacker holds a mining advantage. The researchers found that what they define as “rational” miners will stop mining if they detect that they are at a disadvantage. “If the profitability decrease is significant enough so that all miners stop mining, the attacker can stop mining too,” they write. “The blockchain thus grinds to a complete halt.” The study authors add: “We find that Bitcoin’s vulnerability to BDoS increases rapidly as the mining industry matures and profitability drops.” According to Ittay Eyal, a senior lecturer at Technion who co-authored the study, BDoS attacks are different from a type of attack called selfish mining, in which the attacker manipulates the system to get more than their fair share of rewards. In a BDoS attack, the attacker’s aim is to take down a proof-of-work cryptocurrency rather than reap rewards. Eyal said the findings of the study pertain specifically to Bitcoin, but that it’s likely there are similar attacks against Ethereum. The researchers have not gathered any concrete results on this yet, he said.
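The "rational miner" condition can be expressed as a toy profitability model. All numbers below are invented for illustration and this is a simplification of the paper's analysis, not its actual model:

```python
def keeps_mining(hash_share, block_reward, cost_per_round, disadvantage=0.0):
    """A rational miner keeps mining only while expected revenue per
    round still beats its operating cost. The attacker's published proof
    acts as a perceived revenue disadvantage."""
    expected_revenue = hash_share * block_reward * (1.0 - disadvantage)
    return expected_revenue > cost_per_round

print(keeps_mining(0.01, 6.25, 0.05))                    # profitable: keeps mining
print(keeps_mining(0.01, 6.25, 0.05, disadvantage=0.4))  # attacker's proof tips it
```

Note how thin the margin is here (0.0625 revenue vs. 0.05 cost), which mirrors the paper's observation that vulnerability grows as industry-wide profitability drops.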


Zscaler CEO: Network Security Is Dead. Long Live SASE

The security vendor started as a secure web gateway provider before adding firewall and zero-trust network access. It then added out-of-band cloud access security broker (CASB) capabilities to its platform, all of which positioned it perfectly to dive into SASE when Gartner coined the term last year. Earlier this year, Zscaler also acquired Edgewise Networks to add that company’s zero-trust networking and application microsegmentation technologies to its platform, which also gave it a SASE boost. SASE, according to Gartner, consolidates networking and security capabilities into an edge cloud-delivered service. While Zscaler arguably provides a best-of-breed SASE security stack, it doesn’t own a networking piece. Instead, Zscaler partners with all of the SD-WAN vendors including VMware, and, in fact, VMware CEO Pat Gelsinger joined Chaudhry for a video appearance during the virtual keynote to tout the two companies’ SASE partnership. When asked if Zscaler plans to continue partnering with SD-WAN vendors to provide a full SASE architecture or acquire SD-WAN to provide its own networking capabilities, Chaudhry said there’s no reason for Zscaler to provide SD-WAN. “We believe that the notion that SASE means networking and security coming together is a misinterpretation of it,” the exec said.


Soft PLCs: The industrial innovator’s dilemma

Industrial control has come a long way from being bulky, maintenance-heavy relay-based systems in the 1960s to today’s high-speed processor-based programmable logic controllers (PLCs). What began as a basic attempt to replace relay control quickly became the foundation of modern industrial control and automation. The introduction of Windows in 1985 spawned the first wave of soft PLCs, which manifested themselves in PC-based control systems. The engineering community quickly saw the benefits of combining PLC control and HMI in one box – the PC. Several Windows-based control systems emerged in the 1990s (e.g. ASAP, Think and Do, Steeplechase Software and Wonderware), but none managed to gain sustained traction in the marketplace. “Blue screens of death” raised questions about the reliability of these systems, and the lack of virtualization / containerization technologies made it difficult to efficiently run multiple workloads (e.g. HMI and control) on a single box. Fast forward to 2020, and the value proposition of PC-based control is much stronger than it was in the 1990s, as the maturation of Linux operating systems, virtualization technologies and low-cost edge computing hardware have addressed many of the early issues that plagued the first wave of PC-based control systems.


As Ransomware Booms, Are Cyber Insurers Getting Cold Feet?

Constant innovation is one factor, as ransomware operations have continued to refine their business strategies, including exfiltrating and leaking stolen data, using affiliate programs to boost their reach, and even hiring call centers to run boiler-room operations to pressure victims to pay. In Q3, the average ransom payment - when a victim paid - was $233,817, which was an increase of 31% from the previous quarter, reports ransomware incident response firm Coveware. Gangs' successes carry an obvious cost for victims who pay; their criminal profits put a drain on someone else's budget. When victims do pay a ransom, some will remit it entirely from their own coffers. But many organizations now carry cyber insurance with ransomware or extortion protection. As ransomware payouts have risen, however, insurance providers' profits have been taking a dive. Accordingly, some insurers now appear to be "attempting to shelter themselves from these losses, either by excluding extortion events from standard cyber insurance coverage or by introducing onerous new conditions on policyholders," the Seriously Risky Business newsletter reported last week. Experts across the security and insurance industries say that, with ransomware racking up record profits, there's little chance of it abating anytime soon.


Agile is changing software development. Here's how one company made the switch

At Capital One, Soule has helped the bank move away from legacy ways of working and towards an investment in software engineering capability and Agile methodologies. It's a long-term rebalancing act that has seen the company adopt close-knit development teams with clear and concise deliverables. "Changing little and often is now a reality for this organisation," he says. "That change is the mark of the difference between large, monolithic Waterfall delivery of implementations to open-source software, delivered incrementally in feature form on existing products. We've converted most of our IT spending on assets into people. That's been a stellar story." Back in 2014, there were 30 engineers – most of them infrastructure engineers – working for Capital One Europe. Today, there are as many as 300 engineers in the UK business alone. The vast majority are software engineers, compared with just a few six years ago. Soule says this transformation to Agile working has had a "game-changing" impact on the delivery of applications to customers. In the old Waterfall-based way of working, systems and services would take years of effort and millions of pounds to create. These big projects, says Soule, consumed resources and meant other interesting innovations fell by the wayside: "Often other things didn't get done because all the focus of the development engine was on that one big thing."


Why DSLs? A Collection of Anecdotes

Domain-specific languages rely on a different approach. They allow the domain expert to specify the behavior of the software directly. The transformation from unstructured thought to executable specification happens in their brains. The executable specifications - or models - created this way are then automatically transformed into "real" source code by machinery developed by software engineers. Does this really work? It does under certain conditions. In particular, the language must be suitable for use by non-programmers. The primitives in the language should not be generic to "computation" - such as variables, conditions, loops, functions, monads or classes - but instead be specific to the domain, and therefore meaningful to the user: decision table, treatment step, tax rule or satellite telemetry message definition. The syntax should build on existing notations and conventions used in the domain - tables, symbols, diagrams and text - and not just consist of magenta-colored keywords and curly braces. DSLs are also usually less flexible in the sense that users can only compose new abstractions in very limited ways; while this would be a problem for general-purpose languages, it is a plus for DSLs because it ensures that programs are less complicated and easier for tools to analyze and provide IDE support for.
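As a toy illustration of the "decision table" primitive mentioned above, here is a minimal embedded DSL sketched in Python (the discount domain and field names are invented for the example). The domain expert edits the table rows; generic evaluation machinery written by software engineers turns them into executable logic.

```python
# A decision table: each row pairs a condition on the input record with an
# outcome. A domain expert maintains the rows; the machinery below is generic.
DISCOUNT_TABLE = [
    (lambda order: order["total"] >= 1000, "gold_discount"),
    (lambda order: order["total"] >= 100,  "standard_discount"),
    (lambda order: True,                   "no_discount"),  # default row
]

def evaluate(table, record):
    """Return the outcome of the first row whose condition matches."""
    for condition, outcome in table:
        if condition(record):
            return outcome
    raise ValueError("no matching row")

result = evaluate(DISCOUNT_TABLE, {"total": 250})
```

A real DSL would give the expert a tabular notation and an IDE rather than Python lambdas, but the division of labour is the same: the rows are the executable specification, and the `evaluate` machinery stays out of the expert's sight.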



Quote for the day:

"Successful leadership requires positive self-regard fused with optimism about a desired outcome." -- Warren Bennis

Daily Tech Digest - December 09, 2020

The commodification of customer data privacy

B2B customers want personalized experiences, too. Aside from the data they might input into a contact form, B2B buyers put plenty of data online for the world to see. You can build a B2B buyer profile just by gleaning data from their LinkedIn profile and their interactions online. Software exists that enables businesses to automate the process by scraping data from public sources. But it needs to be clear that this information is being collected and stored in good faith. Businesses should limit the amount of data they collect from customers, only using the data essential to their operations. Customers should always be made aware of what data is being collected, why, and how it will be used. This information should be easy to find and understand, not obfuscated by legal jargon and fine print. Some good examples of this are the “cookie” statements businesses place on their websites under the EU’s General Data Protection Regulation (GDPR). Finally, data must be stored in a secure environment, then erased when it is no longer being used. The customer should be made aware of what policies and protections are in place regarding the use of their data.


Unethical AI unfairly impacts protected classes - and everybody else as well

Why is ethics so important now with AI? Wherever there is a social context, anything involving people, ethical questions are necessary because it becomes personal. Before big data and data science, researchers categorized people into cohorts, or categories, such as tofu lovers with a college degree, or evangelical Christians. There wasn't enough data available at the individual level to draw inference on a single person. Even when evaluating a single person for credit or life insurance, the few available characteristics were used to compare with a larger group. What is different today is an avalanche of intimate, personal detail, exacerbated by a shift in sources, from internal "operational exhaust" to a myriad of external, non-traditional data, such as pictures and videos that are not even vetted. In the wrong hands, with the wrong model, it can wreak havoc on people's lives. The capability to produce errant models and inferences and put them in production at a scale that is orders of magnitude greater than anything before compounds the potential adverse outcomes. Today, your "digital footprint," information about you on the internet, is so enormous that it is estimated the growth of your personal data on the internet is two megabytes per second.


Using deep learning to infer the socioeconomic status of people in different urban areas

Researchers at the Ecole Normale Superieure (ENS) de Lyon and Central European University (CEU) have recently developed a deep neural network that could be used to study the socioeconomic inequalities that can arise from urbanization. Their study, featured in Nature Machine Intelligence, confirms the potential of convolutional neural networks (CNNs) for the in-depth analysis of geographical regions. For many years, efficiently tracking urbanization, the process through which an urban area becomes increasingly large and populated, has proved fairly challenging. The development of increasingly advanced remote sensing and satellite technologies, however, opened up exciting new possibilities for the observation of specific geographical regions and consequently for urbanization-related research. In their study, the researchers at ENS Lyon and CEU tried to use deep learning algorithms to analyze the images collected by these tools. “Our initial goal was actually to check what was the finest spatial resolution that we could get our algorithm (i.e., predicting the average income of an area based on its satellite image) to work with,” Jacob Levy Abitbol and Marton Karsai, the researchers who carried out the study, told TechXplore.
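The pipeline the researchers describe, a CNN regressing average income from a satellite image tile, can be caricatured in a few lines of NumPy. This is only a structural sketch with random, untrained weights, not the authors' model: a convolution extracts local features from the tile, pooling summarizes them, and a linear head reads out a scalar income estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D convolution (single channel), the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def predict_income(tile, kernel, weights, bias):
    """Toy regression head: convolve the tile, apply ReLU, global-average-
    pool the feature map, then a linear readout to a scalar income value."""
    features = np.maximum(conv2d(tile, kernel), 0.0)  # ReLU
    pooled = features.mean()                          # global average pooling
    return pooled * weights + bias

tile = rng.random((16, 16))           # stand-in for a satellite image tile
kernel = rng.standard_normal((3, 3))  # an untrained 3x3 filter
income = predict_income(tile, kernel, weights=1000.0, bias=20000.0)
```

In the real study a deep, trained network replaces the single random filter, but the shape of the task is the same: image tile in, one socioeconomic number out.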


Digital transformation: 4 ways to help IT teams adapt to disruption

Prioritize user adoption and buy-in. That includes understanding generational and workstyle differences of various users and establishing clear metrics around adoption, usage, and engagement. Analyzing the depth of communication and relationships that result from the collaborations will reduce communication gaps and breakdowns and provide a clear indication that the collaboration is working. ... IT leaders aiming for digital success must better identify future skills requirements, push for increased investment and uptake in skills acquisition, improve access to quality training to support future skills, and create an agile skills development system that can adapt to market needs to fuel a culture of lifelong learning. Sometimes those answers can come from within. ... This tells us we need a different kind of leadership, one in which leaders inspire rather than require. ... Adaptive design allows the transformation strategy and resource allocation to adjust over time. That includes flexible talent allocation, a key differentiator in a transformation’s success, and ensuring resources are earmarked for initiatives that span organizational silos. It’s also important to practice the art of simplicity by valuing what works well enough and accepting solutions that satisfy business needs – you can enhance a simple solution later on.


FireEye, a Top Cybersecurity Firm, Says It Was Hacked by a Nation-State

The F.B.I. on Tuesday confirmed that the hack was the work of a state, but it also would not say which one. Matt Gorham, assistant director of the F.B.I. Cyber Division, said, “The F.B.I. is investigating the incident and preliminary indications show an actor with a high level of sophistication consistent with a nation-state.” The hack raises the possibility that Russian intelligence agencies saw an advantage in mounting the attack while American attention — including FireEye’s — was focused on securing the presidential election system. At a moment when the nation’s public and private intelligence systems were seeking out breaches of voter registration systems or voting machines, it may have been a good time for those Russian agencies, which were involved in the 2016 election breaches, to turn their sights on other targets. The hack was the biggest known theft of cybersecurity tools since those of the National Security Agency were purloined in 2016 by a still-unidentified group that calls itself the ShadowBrokers. That group dumped the N.S.A.’s hacking tools online over several months, handing nation-states and hackers the “keys to the digital kingdom,” as one former N.S.A. operator put it.


Dealing with Remote Team Challenges

Most of us are social creatures who enjoy the company of others. The concept of coming together to solve a common goal isn’t necessarily displaced by the concept of remote or distributed, but it can be trickier. There are opportunities for asynchronous communication, increased productivity through "flow" or uninterrupted time, and reduced travel and asset management costs. On the other hand, there are the challenges of equitable access, ensuring adequate resources and tooling as well as the need to address social isolation and the issue of trust. What seems to be happening more and more though is the shift away from a hierarchical structure to a more neural one with teams becoming smaller, more agile and cross-functional, as suggested by the May 2020 McKinsey Report. Mullenweg’s five stages of remote working suggest that those teams that have moved beyond trying to replicate the office model to be remote-first and truly asynchronous are edging closer to Nirvana, a state where distributed teams would consistently perform better than any in-person team. At this point, the creativity, energy, health and productivity of the team are at their peak with individuals performing at their highest level.


CIO interview: John Davison, First Central Group

“Intelligent automation means so much more for us than an efficiency tool,” says Davison. “We are building an entirely new technical competency into our business, so that it becomes part of our DNA. This not only changes operational execution but, importantly, changes the management mindset about the art of the possible and strategic decision-making.” The automated renewal process is another area where Blue Prism has been deployed. With the support of Blue Prism’s partner, IT and automation consultancy T-Tech, the First Central team can check the accuracy of more than 3,000 renewal invitations issued daily in just two hours. The new process verifies each renewal notice, removing the need for costly, time-intensive manual work downstream to correct anomalies and reducing the risk of a regulatory incident. Along with driving operational efficiencies, Davison believes RPA also boosts business confidence. “Risk mitigation is a lot more intangible, but we can measure the cost of distraction and we can measure our effectiveness from a robotics perspective,” he says. Davison’s team has established a robotics capability for the business. “It is not my job to close down operational risk,” he says.


The best programming language to learn now

The typed-language lovers are smart and they write good code, but if you think your code is good enough to run smoothly without the extra information about the data types for each variable, well, Python is ready for you. The computer can figure out the type of the data when you store it in a variable. Why make extra work for yourself? Note that this freewheeling approach may be changing, albeit slowly. The Python documentation states that the Python runtime does not enforce function and variable type annotations, but they can still be used. Perhaps in time adding types will become the dominant way to program in the language, but for now it’s all your choice. ... If you’re writing software to work with data, there’s a good chance you’ll want to use Python. The simple syntax has hooked many scientists, and the language has found a strong following in labs around the country. Now that data science is taking hold in all layers of the business world, Python is following. One of the best inventions for creating and sharing interactive documents, the Jupyter Notebook, began with the Python community before embracing other languages.
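A two-line experiment shows what the documentation means by annotations not being enforced at runtime:

```python
def double(x: int) -> int:
    """The annotations document intent, but CPython does not check them."""
    return x * 2

# Passing a str where an int is annotated raises no error at runtime:
# str * int is simply string repetition.
result = double("ab")

# The annotations are still recorded, so external tools such as type
# checkers can inspect them via __annotations__.
hints = double.__annotations__
```

Static checkers like mypy would flag the `double("ab")` call, which is exactly the division of labour the documentation describes: hints for tools, freedom at runtime.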


Millions of IoT Devices at Risk From TCP/IP Stack Flaws

The research is a continuation of Forescout's exploration of TCP/IP stacks. In June, Forescout revealed the so-called Ripple20 flaws in a single but widely used TCP/IP stack made by an Ohio-based company, Treck. This time around, Forescout broadened its research into more types of TCP/IP stacks. The stacks enable basic network communication. Software developers don't develop their own but instead use off-the-shelf open-source stacks in their products or forks of those projects. "We discovered...33 vulnerabilities in four of seven [TCP/IP] stacks that we analyzed," Costante says. The flaws were found in uIP, FNET, PicoTCP and Nut/Net. Forescout also examined lwIP, CycloneTCP and uC/TCP-IP but didn't find any of the most common coding errors. But Forescout says it doesn't mean those TCP/IP stacks are necessarily free of problems. Many of the issues are centered around Domain Name System functionality. "We find that the DNS, TCP and IP sub-stacks are the most often vulnerable," Forescout says in its report. "DNS, in particular, seems to be vulnerable because of its complexity." Brad Ree, who is CTO of the consultancy ioXt and board member at the ioXt Alliance, says it is concerning to see the IPv6 vulnerabilities in Forescout's findings.
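To see why DNS parsing in particular trips up small embedded stacks, consider name decompression. The sketch below is a simplified illustration, not code from any of the stacks named above: RFC 1035 allows a DNS name to contain a pointer back to an earlier offset in the message, and a parser that follows pointers without an upper bound can be sent into an infinite loop by a two-byte packet.

```python
def parse_dns_name(message: bytes, offset: int, max_jumps: int = 8) -> str:
    """Decode a DNS name, following compression pointers (RFC 1035 s4.1.4).
    Without the max_jumps bound, a message whose pointer refers back to
    itself would loop forever -- the kind of parsing subtlety that makes
    DNS handling error-prone in small embedded stacks."""
    labels, jumps = [], 0
    while True:
        length = message[offset]
        if length == 0:                   # root label: end of name
            break
        if length & 0xC0 == 0xC0:         # two top bits set: compression pointer
            jumps += 1
            if jumps > max_jumps:
                raise ValueError("too many compression pointers")
            offset = ((length & 0x3F) << 8) | message[offset + 1]
            continue
        labels.append(message[offset + 1: offset + 1 + length].decode("ascii"))
        offset += 1 + length
    return ".".join(labels)

# "example.com" encoded directly at offset 0:
msg = b"\x07example\x03com\x00"
name = parse_dns_name(msg, 0)

# A self-referential pointer (0xC0 0x00 -> offset 0) is rejected, not looped on:
looping = b"\xc0\x00"
```

Length bytes, pointer flags, and offsets all arrive from the network untrusted, which is why Forescout's report singles out DNS complexity as a vulnerability driver.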


How Kali Linux creators plan to handle the future of penetration testing

The Kali Linux distribution, designed specifically for penetration testing and digital forensics, is still offered free of charge. Under her leadership, OffSec has formed a dedicated Kali team and made quarterly releases since January 2019, which have received positive reviews from the community. “Kali and other projects like Exploit Database, the largest collection of exploits and vulnerabilities on the internet, keep us uniquely in tune with the needs of the security community and continue to inform our company direction,” she explained. But the thing she’s most proud of is that OffSec has become a company with a clear set of well-defined core company values: family, passion, integrity, community and innovation. “We live by these values as we scale, hire and operate. As a CEO, I found my own style through the support of our team members: have the courage to be authentic and vulnerable. We have cultivated an environment to embrace and practice a growth mindset, build vulnerability-based trust, and empower and enable our team to do their best. My job as CEO is about how to make our people happier in ways I or OffSec can influence.”



Quote for the day:

"Success consists of going from failure to failure without loss of enthusiasm." -- Winston Churchill

Daily Tech Digest - December 08, 2020

Cloud, containers, AI and RPA will spur a strong tech spending rebound in 2021

Not surprisingly, the ability to work remotely has been a critical factor. Forty-four percent of respondents cited Business Continuity Plans as a key factor. Several customers have told us, however, that their business continuity plans were far too focused on disaster recovery and as such they made tactical investments to shore up their digital capabilities. C-suite backing and budget flexibility were cited as major factors. We see this as a real positive in that the corner office and boards of directors are tuned into digital. They understand the importance of getting digital “right” and we believe that they now have good data from the past 10 months on which investments will yield the highest payback. As such, we expect further funding toward digital initiatives. Balance sheets are strong for many companies as several have tapped corporate debt and taken advantage of the low interest rate climate. Twenty-seven percent cited the use of emerging technologies as a factor. Some of these, it could be argued, fall into the first category – working remotely. The bottom line is we believe that the 10-month proof of concept that came from COVID puts organizations in a position to act quickly in 2021 to accelerate their digital transformations further by filling gaps and identifying initiatives that will bring competitive advantage.


Digital transformation teams in 2021: 9 key roles

“Data analytics is a good place to start with any transformation, to make sound decisions and design the proper solutions,” says Carol Lynn Thistle, managing director at CIO executive recruiting firm Heller Search Associates. One foundational IT position is the enterprise data architect or (in some cases) a chief data officer. These highly skilled professionals can look at blueprints, align IT tooling with information assets, and connect to the business strategy, Thistle explains. ... “Digital transformation is about automation of business processes using relevant technologies such as AI, machine learning, robotics, and distributed ledger,” says Fay Arjomandi, founder and CEO of mimik Technology, a cloud-edge platform provider. “It requires individuals with business knowledge who can define the business process in excruciating detail. This is an important role, and we see a huge shortage in the market.” ... “[Organizations need] a digitally savvy person at the CXO level who will help other executives buy into the culture change that will be required to truly transform the organization into one that is digital-first,” says Mike Buob, vice president of customer experience and innovation for Sogeti, the technology and engineering services division of Capgemini.


Quantum Computing Marks New Breakthrough, Is 100 Trillion Times More Efficient

Jiuzhang, as the quantum computer is called, has outperformed Google’s machine, which the company had claimed last year to have achieved quantum computing supremacy. Google’s processor, named Sycamore, is a 54-qubit device consisting of high-fidelity quantum logic gates that could perform the target computation in 200 seconds. The researchers explored boson sampling, a task considered to be a strong candidate for demonstrating quantum computational advantage. As the researchers note in the paper, they performed Gaussian boson sampling (GBS), a new paradigm of boson sampling and one of the first feasible protocols for quantum computational advantage. In boson sampling and its variants, nonclassical light is injected into a linear optical network, which generates highly random photon-number outputs that are measured by single-photon detectors. The researchers sent 50 indistinguishable single-mode squeezed states into a 100-mode ultralow-loss interferometer with full connectivity and a random matrix. They further shared that the whole optical setup is phase-locked and that the output was sampled using 100 high-efficiency single-photon detectors.


Why Edge Computing Matters in IoT

The Edge basically means “not Cloud” because what constitutes the Edge can differ depending on the application. To explain, let’s look at an example. In a hospital, you might want to know the location of all medical assets (e.g., IV pumps, EKG machines, etc.) and use a Bluetooth indoor tracking IoT solution. The solution has Bluetooth Tags, which you attach to the assets you want to track (e.g., an IV pump). You also have Bluetooth Hubs, one in each room, that listen for signals from the Tags to determine which room each Tag is in (and therefore which room the asset is in). In this scenario, both the Tags and the Hubs could be considered the “Edge.” The Tags could perform some simple calculations and only send data to the Hubs if there’s a large change in the sensor data. ... One of the issues with the term “IoT” is how broadly it’s defined. Autonomous vehicles that cost tens of thousands of dollars, collect terabytes of data, and use 4G cellular networks are considered IoT. At the same time, sensors that cost a couple of dollars, collect just bytes of data, and use Low-Power Wide-Area Networks (LPWANs) are also considered IoT. The problem is that everyone is focusing on high-bandwidth IoT applications like autonomous vehicles, the smart home, and security cameras.
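The "only send data on a large change" behaviour attributed to the Tags above is a classic edge pattern, sketched here in Python (the threshold and readings are invented for illustration): the device transmits a reading only when it differs from the last transmitted value by more than a set threshold, saving radio bandwidth and battery.

```python
class EdgeReporter:
    """Report a reading upstream only when it moves by more than
    `threshold` from the last value sent -- a simple way an edge
    device cuts radio traffic and battery drain."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.last_sent = None

    def process(self, reading: float):
        """Return the reading if it should be transmitted, else None."""
        if self.last_sent is None or abs(reading - self.last_sent) > self.threshold:
            self.last_sent = reading
            return reading
        return None

reporter = EdgeReporter(threshold=0.5)
readings = [20.0, 20.1, 20.2, 21.0, 21.1]  # e.g. temperature samples
sent = [r for r in readings if reporter.process(r) is not None]
```

Five samples produce only two transmissions, which is precisely the economics that make battery-powered LPWAN sensors viable.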


Could AI become dangerous?

When asked about the dangers of AI, Arman asserted that ‘danger has always existed in every technological innovation in history, from the ever-increasing trail of pollution caused by the first Industrial Revolution to the idea of nuclear power generation to the free use of pesticides everywhere to the genetic modification of food and so on.’ AI is only a part of that, as ‘it is on its path to outgrow humans’ capacity to fully understand how it makes decisions and what is the base of its outcomes.’ Indeed, this would be the first time that our intellectual superiority would be taken away. To shed some light on this, Arman retells a conversation he had with one AI lead from key players in Silicon Valley during a meeting in 2017: ‘After 2 hours of discussing, brainstorming and trying to picture a path, we ended up having no firm idea on where AI was leading us. The final outcome was that each individually announced that they believe it is too early to predict anything and we can’t even say with certainty where we will be in 18 months. They also refused to acknowledge the risk that was brought up through research from my team projecting that – back in 2017, even with AI still being in its infancy – it had the ability to take away over 1 billion jobs across the globe.’


What’s New on F#: Q&A With Phillip Carter

FP and Object-Oriented Programming (OOP) aren’t really at odds with each other, at least not if you use each as a tool rather than a lifestyle. In FP, you generally try to cleanly separate your data definitions from the functionality that operates on them. In OOP, you’re encouraged to combine them and blur the differences between them. Both can be incredibly helpful depending on what you’re doing. For example, in the F# language we encourage the use of objects to encapsulate data and expose functionality conveniently. That’s a far cry from encouraging people to model everything using inheritance hierarchies, and at the end of the day you still tend to work with an object in a functional way, by calling methods or properties that just produce outputs. Both styles can work well together if you don’t go “all in” on one approach or the other. ... What’s interesting is that even though F# runs on .NET, which often has an “enterprisey” kind of reputation, F# itself doesn’t really suffer the negative aspects of that kind of reputation. It can be used for enterprise work, but it’s usually seen as lightweight and its community is engaged and available as opposed to stuck behind a corporate firewall.


3 questions to ask before adopting microservice architecture

Teams may take different routes to arrive at a microservice architecture, but they tend to face a common set of challenges once they get there. John Laban, CEO and co-founder of OpsLevel, which helps teams build and manage microservices, told us that “with a distributed or microservices-based architecture your teams benefit from being able to move independently from each other, but there are some gotchas to look out for.” Indeed, the linked O’Reilly chart shows how the top 10 challenges organizations face when adopting microservices are each shared by 25%+ of respondents. While we discussed some of the adoption blockers above, feedback from our interviews highlighted issues around managing complexity. The lack of a coherent definition for a service can cause teams to generate unnecessary overhead by creating too many similar services or spreading related services across different groups. One company we spoke with went down the path of decomposing their monolith and took it too far. Their service definitions were too narrow, and by the time decomposition was complete, they were left with 4,000+ microservices to manage. They then had to backtrack and consolidate down to a more manageable number.


IT careers: 10 critical skills to master in 2021

The key to adaptability, virtual collaboration, and digital transformation (and agile) is distributed leadership and self-managed teams. This requires that everyone have core leadership skills, and not just people in the positions of managers and above. For the past 11 years, I’ve been training and coaching IT professionals at every job level – from individual contributors up to CIOs – in what I believe are the six key core leadership skills that every IT professional needs to master, even more so today than at any time in the past. ... "Yes, IT professionals need to know the underpinnings of technology and tech trends. But what many fail to realize is how heavily IT leaders rely on effective communication skills to do their jobs successfully. As CIO of ServiceNow, my role demands clear, consistent communication – both within my organization and across other functions – to make sure that everyone is aligned on the right outcomes. Communication is the key to digital transformation and IT professionals need to communicate with employees across departments on what this means for their work.” - Chris Bedi, CIO, ServiceNow


How to industrialize data science to attain mastery of repeatable intelligence delivery

As you look at the amount of productive time data scientists spend creating value, that can be pretty small compared to their non-productive time — and that’s a concern. Part of the non-productive time, of course, has been with those data scientists having to discover a model and optimize it. Then they would do the steps to operationalize it. But maybe doing the data and operations engineering things to operationalize the model can be much more efficiently done with another team of people who have the skills to do that. We’re talking about specialization here, really. But there are some other learnings as well. I recently wrote a blog about it. In it, I looked at the modern Toyota production system and started to ask questions around what we could learn about what they have learned, if you like, over the last 70 years or so. It was not just about automation, but also how they went about doing research and development, how they approached tooling, and how they did continuous improvement. We have a lot to learn in those areas. For an awful lot of organizations that I deal with, they haven’t had a lot of experience around such operationalization problems. They haven’t built that part of their assembly line yet. 


What is neuromorphic computing? Everything you need to know about how it is changing the future of computing

First, to understand neuromorphic technology it makes sense to take a quick look at how the brain works. Messages are carried to and from the brain via neurons, a type of nerve cell. If you step on a pin, pain receptors in the skin of your foot pick up the damage and trigger something known as an action potential -- basically, a signal to activate -- in the neuron that's connected to the foot. The action potential causes the neuron to release chemicals across a gap called a synapse, and this happens across many neurons until the message reaches the brain. Your brain then registers the pain, at which point messages are sent from neuron to neuron until the signal reaches your leg muscles -- and you move your foot. An action potential can be triggered either by lots of inputs at once (spatial) or by input that builds up over time (temporal). These techniques, plus the huge interconnectivity of synapses -- one synapse might be connected to 10,000 others -- mean the brain can transfer information quickly and efficiently. Neuromorphic computing models the way the brain works through spiking neural networks. Conventional computing, by contrast, is based on transistors that are either on or off, one or zero.
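The temporal build-up described above is exactly what a leaky integrate-and-fire model, the simplest spiking-neuron abstraction, captures. A minimal sketch follows (the parameter values are arbitrary): input charge accumulates on a leaky membrane potential, and the neuron emits a spike, then resets, only when the potential crosses a threshold.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential decays by
    `leak` each step, accumulates incoming current, and resets to 0 after
    crossing `threshold` (which emits a spike)."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0          # reset after firing
        else:
            spikes.append(0)
    return spikes

# A single weak input decays away without firing; the same input repeated
# builds up over time (temporal summation) until the neuron spikes.
weak = lif_neuron([0.3, 0.0, 0.0, 0.0])
sustained = lif_neuron([0.3, 0.3, 0.3, 0.3, 0.3])
```

Unlike a transistor's on/off state, the neuron's output here carries information in the timing of its spikes, which is the core idea spiking neural networks borrow from biology.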



Quote for the day:

"Every great leader has incredible odds to overcome." -- Wayde Goodall

Daily Tech Digest - December 07, 2020

API3: The Glue Connecting the Blockchain to the Digital World

dAPIs are on-chain data feeds composed of aggregated responses from first-party (API provider-operated) oracles. This removes many of the vulnerabilities, unnecessary redundancies, and middleman taxes created by existing third-party oracle solutions. Further, using first-party oracles leverages the off-chain reputation of the API provider (compare this to the nonexistent reputation of anonymous third-party oracles). See our article "First-Party vs Third-Party Oracles" for a more extended treatment of these issues. dAPIs are also data feeds built with transparency. What we mean by this is: you know exactly where the data comes from, which ensures data quality as well as independence of data sources to mitigate skew in aggregated results. Rather than having oracle-level staking -- which is impractical and arguably infeasible for reasons alluded to in this article -- API3 has a staking pool. API3 holders can stake into the protocol. This stake backs insurance services that protect users from potential damages caused by dAPI malfunctions. The collateral utility makes participants share API3's operational risk and incentivizes them to minimize it. Staking in the protocol also grants stakers inflationary rewards and a share of profits.


Reconciling political beliefs with career ambitions

Data has been on the front lines in recent culture wars due to accusations of racial, gender, and other forms of socioeconomic bias perpetrated in whole or in part through algorithms. Algorithmic bias has become a hot-button issue in global society, a trend that has spurred many jurisdictions and organizations to institute a greater degree of algorithmic accountability in AI practices. Data scientists who've long been trained to eliminate biases from their work now find their practices under growing scrutiny from government, legal, regulatory, and other circles. Eliminating bias in the data and algorithms that drive AI requires constant vigilance not only from data scientists but from people up and down the corporate ranks. As Black Lives Matter and similar protests have pointed out, data-driven algorithms can embed serious biases that harm demographic groups (racial, gender, age, religious, ethnic, or national origin) in various real-world contexts. Much of the recent controversy surrounding algorithmic bias has focused on AI-driven facial recognition software. Biases in facial recognition applications are especially worrisome if the software is used to direct predictive policing programs, or if it enables abuse by law enforcement in urban areas with many disadvantaged minority groups.


Why Data Privacy Is Crucial to Fighting Disinformation

In essence, if you can create a digital clone of a person, you can much better predict his or her online behavior. That's a core part of the monetization model of social media companies, but it could become a capability of adversarial states who acquire the same data through third parties. That would enable much more effective disinformation. A new paper from the Center for European Policy Analysis, or CEPA, also out on Wednesday, observes that while there has been progress against some tactics that adversaries used in 2016, policy responses to the broader threat of micro-targeted disinformation "lag." "Social media companies have concentrated on takedowns of inauthentic content," wrote authors Alina Polyakova and Daniel Fried. "That is a good (and publicly visible) step but does not address deeper issues of content distribution (e.g., micro-targeting), algorithmic bias toward extremes, and lack of transparency. The EU's own evaluation of the first year of implementation of its Code of Practice concludes that social media companies have not provided independent researchers with data sufficient for them to make independent evaluations of progress against disinformation." Polyakova and Fried suggest the U.S. government make several organizational changes to counter foreign disinformation.


How to assess the transformation capabilities of intelligent automation

We're talking about smart, multi-tasking robots that are increasingly trusted catalysts at the core of digital work transformation strategies. This is because they effortlessly perform joined-up, data-driven work across multiple operating environments of complex, disjointed, difficult-to-modify legacy systems and manual workflows. And unlike any other robot, they deliver work without interruption, automatically making adjustments according to obstacles -- different screens, layouts or fonts, application versions, system settings, permissions, and even language. These robots also uniquely solve the age-old problem of system interoperability by reading and understanding applications' screens in the same way humans do. They re-purpose the human interface as a machine-usable API -- crucially, without touching underlying system programming logic. This 'universal connectivity' also means that all current and future technologies can be used by robots -- without the need for APIs or any form of system integration. ... This capability breathes new life into any age of technology and enables these robots to be continually augmented with the latest cloud, artificial intelligence, machine learning, and cognitive capabilities that are simply 'dragged and dropped' into newly designed work process flows.


Basics of the pairwise, or all-pairs, testing technique

All-pairs testing greatly reduces testing time, which in turn controls testing costs. The QA team only checks a subset of input/output values -- not all -- to generate effective test coverage. This technique proves useful when there are simply too many possible configuration options and combinations to run through. Pairwise testing tools make this task even easier. Numerous open source and free tools exist to generate pairwise value sets. The tester must inform the tool about how the application functions for these value sets to be effective. With or without a pairwise testing tool, it's crucial for QA professionals to analyze the software and understand its function to create the most effective set of values. Pairwise testing is not a no-brainer in a testing suite. Beware these factors that could limit the effectiveness of all-pairs testing: unknown interdependencies of variables within the software being tested; unrealistic value combinations, or ones that don't reflect the end user; defects that the tester can't see, such as ones that don't reflect in a UI view but trigger error messages into a log or other tracker; and tests that don't find defects in the back-end processing engines or systems. 
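The reduction the excerpt describes can be made concrete with a small greedy all-pairs generator. This is an illustrative sketch, not one of the open source tools mentioned above; the parameter names and values are hypothetical:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs test-case generator.

    params: dict mapping parameter name -> list of possible values.
    Returns a list of test cases (dicts) such that every pair of
    values from any two parameters appears in at least one case.
    """
    names = list(params)
    # Enumerate every value pair that must be covered at least once.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add(((a, va), (b, vb)))

    suite = []
    while uncovered:
        # Pick the full combination covering the most uncovered pairs.
        best_case, best_pairs = None, set()
        for combo in product(*(params[n] for n in names)):
            case = dict(zip(names, combo))
            covered = {((a, case[a]), (b, case[b]))
                       for a, b in combinations(names, 2)} & uncovered
            if len(covered) > len(best_pairs):
                best_case, best_pairs = case, covered
        suite.append(best_case)
        uncovered -= best_pairs
    return suite

cases = pairwise_suite({
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "de"],
})
# Exhaustive testing of this space needs 2 * 3 * 2 = 12 cases;
# pairwise coverage needs roughly half that.
print(len(cases), "cases cover all pairs")
```

The exhaustive search inside the loop is fine for small configuration spaces like this; production tools use smarter constructions, but the coverage guarantee is the same: every two-way interaction is exercised at least once.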


How can companies secure a hybrid workforce in 2021?

Even before remote work was ubiquitous, accidental and malicious insider threats posed a serious risk to data security. As trusted team members, employees have unprecedented access to company and customer data, which, when left unchecked, can undermine company, customer, and employee privacy. These risks are magnified by remote work. Not only has the pandemic’s impact on the job market made malicious insiders more likely to capture or compromise data to gain leverage with new employment prospects or to generate extra income, but accidental insiders are especially prone to errors when working remotely. For example, many employees are blurring the lines between personal and professional technology, sharing or accessing sensitive data in ways that could undermine its integrity. In response, companies need to be proactive about establishing and enforcing clear data management guidelines. In this regard, communication is key, and accountability through monitoring initiatives or other efforts will help keep data protected during the transition.


Working from home dilemma: How to manage your team, without the micro-management

Employees need to feel connected and trusted. Yet leaders who find it tough to trust their workforce might opt for micro-management; they'll continue to check up on their workers rather than checking in to see how they're getting on. Peterson says leaders should look to develop a management style that cultivates wellbeing. In uncertain times, employees need a sense of certainty from their leaders. Executives should ensure their staff feel engaged, not micro-managed. "It's more important than ever for managers to ask whether people are getting their ABCs: their autonomy, belonging and competence. Leaders who don't get that from their own boss will tend to overcompensate with the people they're managing; they'll micro-manage, and that's not helpful," he says. Lily Haake, head of the CIO Practice at recruiter Harvey Nash, agrees that leaders who micro-manage will struggle in the new normal. They won't get the best from their workers, and their effectiveness will suffer. Haake says managers who want to cultivate wellbeing need to pick up on subtle signs that all isn't well. Executives should adopt a considered approach, using a technique like active listening, to pick up on potential issues before they become major problems.


The Fourth Industrial Revolution: Legal Issues Around Blockchain

Stakeholders in blockchain solutions will need to ensure that their products comply with a legal and regulatory framework that was not conceived with this technology in mind. From a commercial law standpoint, smart contracts must be contemplated for negotiation, execution and administration on a blockchain, and in a legal and compliant fashion. Liability needs to be addressed. What if the contract has been miscoded? What if it does not achieve the parties' intent? The parties must also agree on applicable law, jurisdiction, proper governance, dispute resolution, privacy and more. There are public policy concerns that should be taken into account in shaping new laws, rules and regulations. For example, permissionless blockchains can be used for illegal purposes such as money laundering or circumventing competition laws. Also, participants may be exposed to irresponsible actions on the part of the "miners" who create new blocks. Unfortunately, there aren't any current legal remedies for addressing corrupt miners. As lawyers and technologists ponder these issues, several solutions are being bandied about. One possible remedy involves a hybrid of permissioned and permissionless blockchains.


Why enterprises are turning from TensorFlow to PyTorch

PyTorch is seeing particularly strong adoption in the automotive industry, where it can be applied to pilot autonomous driving systems from the likes of Tesla and Lyft Level 5. The framework also is being used for content classification and recommendation in media companies and to help support robots in industrial applications. Joe Spisak, product lead for artificial intelligence at Facebook AI, told InfoWorld that although he has been pleased by the increase in enterprise adoption of PyTorch, there's still much work to be done to gain wider industry adoption. "The next wave of adoption will come with enabling lifecycle management, MLOps, and Kubeflow pipelines and the community around that," he said. "For those early in the journey, the tools are pretty good, using managed services and some open source with something like SageMaker at AWS or Azure ML to get started." ... "The TensorFlow object detector brought memory issues in production and was difficult to update, whereas PyTorch had the same object detector and Faster R-CNN, so we started using PyTorch for everything," Alfaro said. That switch from one framework to another was surprisingly simple for the engineering team, too.


Techno-nationalism isn’t going to solve our cyber vulnerability problem

Techno-nationalism is fueled by a complex web of justified economic, political and national security concerns. Countries engaging in “protectionist” practices essentially ban or embargo specific technologies, companies, or digital platforms under the banner of national security, but we are seeing it used more often to send geopolitical messages, punish adversary countries, and/or prop up domestic industries. Blanket bans give us a false sense of security. At the same time, when any hardware or software supplier is embedded within critical infrastructure – or on almost every citizen’s phone – we absolutely need to recognize the risk. We need to take seriously the concern that their kit could contain backdoors that could allow that supplier to be privy to sensitive data or facilitate a broader cyberattack. Or, as is the lingering case with TikTok, the concern is whether the collection of data on U.S. citizens via an entertainment app could be forcibly seized under Chinese law and enable state-backed cyber actors to then target and track federal employees or conduct corporate espionage.



Quote for the day:

"Stand up for what you believe, let your team see your values and they will trust you more easily." -- Gordon Tredgold