Daily Tech Digest - February 24, 2020

Why data literacy needs to be part of a company's DNA

"Companies with lower levels of data literacy in the workforce will be at a competitive disadvantage," said Martha Bennett, vice president and principal analyst, Forrester. "It's also important to stress that different roles have different requirements for data literacy; advanced firms also understand that increasing data literacy is not a once-and-done training exercise, it's a continuous process." These days, everyone in an organization needs to be data literate, and the organization must establish a well-rounded data literacy program to ensure effective decision making. Such programs must address the capacity to collect, analyze, and disseminate data tailored to the needs of diverse organizational roles. "Lack of data literacy puts you at a disadvantage, and can lead to potentially disastrous outcomes," Bennett said, "and we're not just talking about a business context here, the same applies in our personal lives." Numbers play a role in daily decisions, both in business and in our personal lives. Quantitative information must be evaluated, whether it's the likelihood of an event, the increased risk of developing a disease, how people lean politically, or how popular a product or service is.

5 reasons to choose PyTorch for deep learning

One of the primary reasons that people choose PyTorch is that its code is fairly simple to understand; the framework is designed and assembled to work with Python rather than pushing up against it. Your models and layers are simply Python classes, and so is everything else: optimizers, data loaders, loss functions, transformations, and so on. Because PyTorch operates in eager execution mode, rather than under the static execution graph of traditional TensorFlow (yes, TensorFlow 2.0 does offer eager execution, but it’s a touch clunky at times), it’s very easy to reason about your custom PyTorch classes, and you can dig into debugging with TensorBoard or standard Python techniques, all the way from print() statements to generating flame graphs from stack trace samples. This all adds up to a very friendly welcome for those coming into deep learning from other data science frameworks such as Pandas or Scikit-learn. PyTorch also has the plus of a stable API that has had only one major change from the early releases to version 1.3 (that being the change of Variables to Tensors).
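To make that concrete (a minimal sketch of our own, not taken from the article), a PyTorch model really is just a Python class, and eager execution means forward() runs like any ordinary method, so a plain print() works for debugging:

```python
import torch
from torch import nn

class TinyNet(nn.Module):
    """A model is just a Python class subclassing nn.Module."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # Eager mode: this line runs immediately on real tensors,
        # so ordinary print() debugging just works.
        print("hidden activations shape:", h.shape)
        return self.fc2(h)

model = TinyNet()
out = model(torch.randn(3, 4))  # calling the model runs forward() eagerly
print(out.shape)  # torch.Size([3, 2])
```

Because nothing is compiled into a static graph first, you can set breakpoints or inspect intermediate tensors anywhere in forward().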

AI: It's time to tame the algorithms and this is how we'll do it

To achieve this objective, the Commission wants to create an "ecosystem of trust" for AI. And it starts with placing a question mark over facial recognition. The Commission said it would consider banning the technology altogether, and commissioners are planning to launch a debate about "which circumstances, if any" could justify its use. The EU's white paper also suggests having different rules depending on where and how an AI system is used. A high-risk system is one used in a critical sector, like healthcare, transport or policing, and one with a critical use, such as producing legal effects or deciding on social-security payments. Such high-risk systems, said the Commission, should be subject to stricter rules to ensure that the application doesn't transgress fundamental rights by delivering biased decisions. In the same way that products and services entering the European market are subject to safety and security checks, argues the Commission, so should AI-powered applications be controlled for bias. The dataset feeding the algorithm could have to go through conformity assessments, for instance. The system could also be required to be entirely retrained in the EU.

Why You Should Revisit Value Discovery

There are at least two reasons for the shift. The first is that we are in a digital world. Now the cost of creating new products can be extraordinarily low (a developer, a laptop). And the cost factor has given rise to new methodologies like Lean Startup and concepts like Fail Fast, Fail Cheap. As enterprises adopt these techniques, they push more projects into corporate innovation pipelines. More on the impact of that later. The second reason relates to software development and delivery methods. It is now possible, often necessary, to chunk software into smaller and smaller units of work and push these into a live test environment with users relatively quickly. Both of these approaches are creating problems. They reinforce the view that more is better. And both also reinforce a challenging proposition: enterprises can be experimental laboratories. Are you starting to get the picture? More ideas of dubious and yet-to-be-tested value find their way into your workflow! Perhaps enterprises can convert this negative into a positive, but to do so means stitching together a value discovery process with very good value management and delivery.

More And More Organizations Injecting Emotional Intelligence Into Their Systems

A growing number of organizations are injecting emotional intelligence into their systems. These include AI capabilities, such as machine learning and voice and facial recognition, which can better detect and appropriately respond to human emotion, according to Deloitte’s 11th annual Tech Trends 2020 report. The trends also indicate more and more organizations using digital twins, human experience platforms and new approaches to enterprise finance, which can redefine the future of tech innovation. The report captures the intersection of digital technologies, human experiences, and increasingly sophisticated analytics and artificial intelligence technologies in the modern enterprise. It explores digital twins, the new role technology architects play in business outcomes, and affective computing-driven “human experience platforms” that are redefining the way humans and machines interact. Tech Trends 2020 also shares key insights and prescriptive advice for business and technology leaders so they can better understand which technologies will disrupt their businesses during the next 18 to 24 months.

7 Tips to Improve Your Employees' Mobile Security

"A bit of a trade-off has to happen, as they're managing an aspect of something that is personally owned by the employee, and they're using it for all kinds of things besides work," says Sean Ryan, a Forrester analyst serving security and risk professionals. On nights and weekends, for example, employees are more likely to let their guards down and connect to public Wi-Fi or neglect security updates. Sure, some people are diligent about these things, while some "just don't care," Ryan adds. This attitude can put users at greater risk for phishing, which is a common attack vector for mobile devices, says Terrance Robinson, head of enterprise security solutions at Verizon. Employees are also at risk for data leakage and man-in-the-middle attacks, especially when they hop on public Wi-Fi networks or download apps without first checking requested permissions. Mobile apps are another hot attack vector, used in nearly 80% of attacks on smartphones. A major challenge in strengthening mobile device security is changing users' perception of it. Brian Egenrieder, chief risk officer at SyncDog, says he sees "negativity toward it, as a whole."

Recent ransomware attacks define the malware's new age

Over the past two years, however, ransomware has come back with a vengeance. Mounir Hahad, head of Juniper Threat Labs at Juniper Networks, sees two big drivers behind this trend. The first has to do with the vagaries of cryptocurrency pricing. Many cryptojackers were using their victims' computers to mine the open source Monero currency; with Monero prices dropping, "at some point the threat actors will realize that mining cryptocurrency was not going to be as rewarding as ransomware," says Hahad. And because the attackers had already compromised their victims' machines with Trojan downloaders, it was simple to launch a ransomware attack when the time was right. "I was honestly hoping that that prospect would be two to three years out," says Hahad, "but it took about a year to 18 months for them to make that U-turn and go back to their original attack." The other trend was that more attacks focused on striking production servers that hold mission-critical data. "If you get a random laptop, an organization may not care as much," says Hahad. "But if you get to the servers that fuel their day-to-day business, that has so much more grabbing power."

To Disrupt or Not to Disrupt?

First, consider the choice of technology. Clayton Christensen long distinguished between disruptive technologies, which initially underperform established products, and sustaining technologies, which do not. Most companies pursue sustaining technologies as a way of retaining existing customers and keeping a healthy profit margin. The reason to choose a technology that is “worse” initially is its potential to outperform older technologies in the relatively near future. Moreover, disruptive technologies tend to be what established companies either are not good at or do not want to adopt for fear of alienating their customer base. In other words, the very existence of disruptive technologies represents an opportunity for startups. Which brings us to the choice of customer for a disruptive entrepreneur. Christensen noted that, if you want to sell a product that underperforms existing products in some dimension (say, a laptop with less computing power), you need to find either a way of selling at a discount, so that the lack of performance is compensated for, or a set of customers who value that performance less than some other feature (for example, longer battery life).

New Wi-Fi chip for the IoT devices consumes 5,000 times less energy

A set of ultra-low power Wi-Fi radios integrated in small chips, each measuring 1.5 square millimeters in area
The invention is based on a technique called backscattering. The transmitter does not generate its own signal; instead, it takes incoming signals from nearby devices (like a smartphone) or a Wi-Fi access point, modifies them and encodes its own data onto them, and then reflects the new signals onto a different Wi-Fi channel to another device or access point. This approach requires much less energy and gives electronics manufacturers much more flexibility. With the tiny Wi-Fi chip, IoT devices will no longer need frequent charging or large batteries, and smart home devices can work completely wirelessly, in some cases even without batteries. The developers note that the new transmitter will significantly increase the operating time on a single charge of various battery-powered Wi-Fi sensors and IoT devices, including, for example, portable video cameras, smart speakers, and smoke detectors. In some cases, the reduced energy consumption will allow sensor manufacturers to make their devices even more compact by switching to smaller-capacity batteries.
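The channel-shifting trick behind backscattering can be sketched numerically (a toy illustration of our own, not the chip's actual design; the frequencies below are invented for the example): multiplying the incident carrier by a locally generated offset tone moves the reflected energy to the sum and difference frequencies, i.e. onto a different channel.

```python
import math

def backscatter_sample(t: float, carrier_hz: float, offset_hz: float, bit: int) -> float:
    """Reflected signal at time t: the incident carrier mixed with an offset tone.

    The tag never generates its own carrier; it only modulates the
    reflection. Mixing shifts energy to carrier_hz +/- offset_hz,
    and the data bit flips the sign (a simple BPSK-style encoding).
    """
    incident = math.cos(2 * math.pi * carrier_hz * t)
    tone = math.cos(2 * math.pi * offset_hz * t)
    return incident * tone * (1 if bit else -1)

# Product-to-sum identity: cos(a)cos(b) = 0.5*(cos(a-b) + cos(a+b)),
# so the reflection lives entirely on the two shifted frequencies.
t, fc, fo = 1e-7, 2.412e9, 2.0e7   # hypothetical carrier and offset
lhs = backscatter_sample(t, fc, fo, 1)
rhs = 0.5 * (math.cos(2 * math.pi * (fc - fo) * t)
             + math.cos(2 * math.pi * (fc + fo) * t))
print(abs(lhs - rhs) < 1e-9)  # True
```

The energy saving comes from the fact that the tag only has to generate the low-frequency offset tone, never the gigahertz carrier itself.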

The importance of talent and culture in tech-enabled transformations

Many industrial companies may assume that top technology talent is out of reach and that their brand and even location might prevent them from attracting the kind of people they need. But technology professionals are less biased against industrial companies than might be expected. Only 7.4 percent of the respondents to a 2018 survey of technology professionals considered their employer’s industry important. Compensation, the work environment, and professional development—all factors within an industrial company’s control—were the factors that matter most to technology talent ... One leading North American industrial company looking to embark on a tech-enabled transformation prioritized bringing in a chief digital officer (CDO) who had credibility among technologists. The company hired a CDO who previously had led businesses at major technology companies and was able to attract three leading product managers and designers from similar organizations. The company used these new hires—who were intimately familiar with rapid, user-centric design—to signal its commitment to world-class digital development.

Quote for the day:

"If you care enough for a result, you will most certainly attain it." -- William James

Daily Tech Digest - February 23, 2020

Robots are not the job killers we all feared

Not only can digital workers contribute to a more effective workforce overall, they can also make for happier employees. More often than not, automation relieves employees of the tedious parts of their jobs that take considerable time and effort to accomplish. In return, they have more opportunities to pursue projects they truly enjoy and are passionate about. One example of this is at S&P, where financial journalists produce reports on the businesses they are assigned to cover. Their work to develop insightful analyses was hindered by the need to first write lengthy stock reports, until they leveraged Blue Prism’s connected-RPA to automate stock report production. This has given the journalists more time to produce thoughtful analysis, which is not only a more rewarding part of their roles but also a more valuable offering to S&P’s clients. In some cases, digital workers are even introduced as part of a broader effort to improve employee happiness and engagement. According to our research, 87% of knowledge workers are comfortable with re-skilling in order to work alongside a digital workforce.

FBI recommends passphrases over password complexity

For more than a decade now, security experts have debated the best way of choosing passwords for online accounts. One camp argues for password complexity, adding numbers, uppercase letters, and special characters; the other argues for password length, reasoning that longer is stronger. This week, in its weekly tech advice column known as Tech Tuesday, the FBI Portland office positioned itself on the side of longer passwords. "Instead of using a short, complex password that is hard to remember, consider using a longer passphrase," the FBI said. "This involves combining multiple words into a long string of at least 15 characters," it added. "The extra length of a passphrase makes it harder to crack while also making it easier for you to remember." The idea behind the FBI's advice is that a longer password, even one relying on simpler words and no special characters, will take longer to crack and require more computational resources. Even if hackers steal your encrypted password from a hacked company, they won't have the computing power and time needed to crack it.
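The math behind that advice is easy to illustrate (a rough sketch of our own, assuming every character is chosen uniformly at random, which real users rarely do): the brute-force search space grows exponentially with length, so a long passphrase over a small alphabet can outpace a short password over a large one.

```python
import math

def search_space_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy if each character is chosen uniformly at random."""
    return length * math.log2(alphabet_size)

# 8 characters drawn from ~95 printable ASCII characters
short_complex = search_space_bits(95, 8)
# 15 characters drawn from 27 symbols (lowercase letters plus space)
long_simple = search_space_bits(27, 15)

print(f"8-char complex password: {short_complex:.1f} bits")   # ~52.6 bits
print(f"15-char simple passphrase: {long_simple:.1f} bits")   # ~71.3 bits
```

Under these idealized assumptions, the simpler 15-character passphrase would take roughly 2^18 (about 260,000) times longer to brute-force than the complex 8-character password.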

How the IRS Audits Cryptocurrency Tax Returns

The presence of a new crypto question on 2019’s Schedule 1 form has individuals more concerned than ever about reporting their crypto assets correctly, and according to experts, this is for good reason. “That is massive,” says Enrolled Agent Clinton Donnelly of Donnelly Tax Law. “This question in the 2019 return … it forces every taxpayer in the United States to decide whether or not they’re going to be honest on this question, because it’s a yes or no, and when you sign the tax return … it’s in small print, it says ‘under penalty of perjury I have reviewed this return and it’s true, complete and correct,’ so failing to check the box is incomplete.” Donnelly went on to explain that by reporting crypto gains in light of the new question, many crypto holders will inadvertently reveal that they first acquired their digital assets years back, which calls their previous years’ returns into suspicion and makes an IRS investigation more likely. Donnelly’s service has so far seen two cryptocurrency audits among its clients, and the tax professional is interested in learning more about what triggers an IRS investigation.

Why AI companies don’t always scale like traditional software startups

Businessman trying to fit through a very small door.
For AI companies, knowing when you’ve found product-market fit is just a little bit harder than with traditional software. It’s deceptively easy to think you’ve gotten there – especially after closing 5-10 great customers – only to see the backlog for your ML team start to balloon and customer deployment schedules start to stretch out ominously, drawing resources away from new sales. The culprit, in many situations, is edge cases. Many AI apps have open-ended interfaces and operate on noisy, unstructured data (like images or natural language). Users often lack intuition around the product or, worse, assume it has human/superhuman capabilities. This means edge cases are everywhere: as much as 40-50% of intended functionality for AI products we’ve looked at can reside in the long tail of user intent. Put another way, users can – and will – enter just about anything into an AI app. Handling this huge state space tends to be an ongoing chore. Since the range of possible input values is so large, each new customer deployment is likely to generate data that has never been seen before. Even customers that appear similar – two auto manufacturers doing defect detection, for example – may require substantially different training data, due to something as simple as the placement of video cameras on their assembly lines.

Cloud misconfigurations cost companies nearly $5 trillion

"Data breaches caused by cloud misconfigurations have been dominating news headlines in recent years, and the vast majority of these incidents are avoidable," said Brian Johnson, chief executive officer and co-founder of DivvyCloud. Using data from a 2019 Ponemon Institute report that put the average cost per lost record globally at $150, DivvyCloud researchers estimated that cloud misconfiguration breaches cost companies upwards of $5 trillion over those two years. "DivvyCloud researchers compiled this report to substantiate the growing trend of breaches caused by cloud misconfigurations, quantify their impact to companies and consumers around the world and identify factors that may increase the likelihood a company will suffer such a breach," the report said. "Year over year from 2018 to 2019, the number of records exposed by cloud misconfigurations rose by 80%, as did the total cost to companies associated with those lost records," according to the report. Unfortunately, the report added, experts expect this upward trend to persist, as companies continue to adopt cloud services rapidly but fail to implement proper cloud security measures.
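As a back-of-the-envelope check (our own arithmetic, not DivvyCloud's), the quoted figures are mutually consistent: at $150 per lost record, a $5 trillion two-year cost implies on the order of 33 billion exposed records.

```python
# Sanity-check the headline estimate from the figures quoted above.
cost_per_record = 150            # Ponemon 2019 global average, USD
total_cost = 5_000_000_000_000   # DivvyCloud's two-year estimate, USD

implied_records = total_cost / cost_per_record
print(f"{implied_records / 1e9:.1f} billion records")  # 33.3 billion records
```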

When Money Becomes Programmable – Part 1

Digital scarcity, when applied to a token such as bitcoin or some other digitally tokenized medium of exchange, allows a new approach to managing our increasingly digitized economy and the micro-economies within it. With scarce digital tokens, communities with a common interest in value generation can embed their shared values into the software’s governance and use these meta-assets as instruments of those values. Once they associate scarce tokens with rights to scarce resources, they can develop controls over token usage that help manage that public good. Here’s one hypothetical example: A local government that wants to reduce pollution, traffic congestion, and the town’s carbon footprint might reward households that invest in local solar generation with negotiable digital tokens that grant access to electric mass-transit vehicles but not to toll roads or parking lots. The tokens would be negotiable, with their value tied to measures of the town’s carbon footprint, creating an incentive for residents to use them.
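The usage-restricted token in that hypothetical can be sketched in a few lines (a deliberately simplified illustration of our own; all class and service names are invented, and a real system would enforce these rules in on-chain governance code, not a single process):

```python
class RestrictedToken:
    """Negotiable token whose redemption is limited to approved uses."""

    # Policy embedded in the token's governance: transit yes, cars no.
    ALLOWED_USES = {"electric_bus", "electric_tram"}

    def __init__(self, holder: str, balance: int):
        self.holder = holder
        self.balance = balance

    def transfer(self, new_holder: str) -> None:
        """Tokens are negotiable: ownership can change hands freely."""
        self.holder = new_holder

    def redeem(self, use: str, amount: int) -> bool:
        """Spending is policy-checked: only approved services accept it."""
        if use not in self.ALLOWED_USES or amount > self.balance:
            return False
        self.balance -= amount
        return True

token = RestrictedToken("resident_1", 10)
print(token.redeem("electric_bus", 3))  # True
print(token.redeem("toll_road", 1))     # False (not an approved use)
```

The point of the sketch is the separation of concerns: transferability is unconditional (the token stays negotiable), while redemption carries the community's policy.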

How Fintech Startups Are Disrupting the Payments Industry

Banks have invested huge sums to build legacy payment systems. However, financial institutions must now not only design processes and systems that incorporate cutting-edge innovations but also meet higher customer expectations. Legacy infrastructure is incompatible with that of other banks or payment processors, which leads to high fees, long delays and frustration for customers when sending and receiving payments. Tokenization solves the issue of interoperability by leveraging a standard token that participants use to transfer value (or data) quickly and efficiently. In the case of Soramitsu’s Project Bakong, its platform allows participants (i.e. banks) to transact directly using token transfers. This method drastically speeds up settlements by eliminating traditional business processes such as transfer instructions, liquidation and payment confirmations at a later date. Cambodia, Malaysia and Thailand are also experimenting with QR codes to improve remittances between these countries. The QR codes are EMVCo compatible and may be used to send and receive payments denominated in local currencies.

Banking for Humanity: Technology to Increase the Human Touch

Because Gen Z is more concerned with authenticity and consistency, banks will need to understand that there is no difference between the offline and online worlds when building their omnichannel strategies. Banks can also consider creating educational channels to promote discourse with Gen Z. By digitalising their services, banks can bridge the gap between financial institutions and the older generations as well. Staff can help older customers with self-service devices so that they have greater control over their money. Branch designs can also take into consideration the personal consultation aspect that caters to their needs. Likewise, video banking can be used within branches to increase access to financial services and assistance for customers who need help with self-service products and technology whenever they want. A bank’s physical services can be carefully merged with the latest digital technologies.

Understanding the Impact of the Cybersecurity Skills Shortage on Business

The impact of the skills shortage is too powerful to ignore and requires intervention. This is where an effective strategy driven by the CISO comes in. The evolution of the CISO has expanded the role from being a technologist solely focused on managing an organization’s security risks, to also being a business strategist able to reach across organizational boundaries to shape and mobilize resources to enable things like secure digital transformation. In today’s threat landscape, security solutions alone are no longer enough to withstand modern cyber threats. The expanding responsibilities of the CISO and the organizational impact of today’s cybersecurity skills shortage both play a critical role in the success of an organization’s digital transformation efforts and security strategies. While an effective CISO can provide essential guidance, a skills shortage can present uncertainties that can still adversely affect the productivity and morale of the security team – which can directly impact the overall security of the organization. By investing time and effort into existing team members, security leaders can actively provide more value to their organizations without having to rely solely on seeking new talent.

AI for CRE: Is Cybersecurity A friend or foe?

While AI could help lower cybersecurity spending in terms of money and manpower, it could also cost companies money. Last year, Juniper Research predicted that the costs of data breaches would increase from $3 trillion in 2019 to $5 trillion in 2024. A number of factors will play into those costs, like lost business, recovery costs and fines, but so will AI. “Cybercrime is increasingly sophisticated; the report anticipates that cybercriminals will use AI, which will learn the behavior of security systems in a similar way to how cybersecurity firms currently employ the technology to detect abnormal behavior,” Juniper’s report said. “The research also highlights that the evolution of deep fakes and other AI-based techniques is also likely to play a part in social media cybercrime in the future.” Security experts have also pointed to this year as the one in which hackers will start launching attacks that leverage AI and machine learning. “The bad [actors] are really, really smart,” Burg of EY Americas told VentureBeat. “And there are a lot of powerful AI algorithms that happen to be open source. And they can be used for good, and they can also be used for bad.”

Quote for the day:

"Leadership is, among other things, the ability to inflict pain and get away with it - short-term pain for long-term gain." -- George Will

Daily Tech Digest - February 21, 2020

Cloud-enabled threats are on the rise, sensitive data is moving between cloud apps

“We are seeing increasingly complex threat techniques being used across cloud applications, spanning from cloud phishing and malware delivery, to cloud command and control and ultimately cloud data exfiltration,” said Ray Canzanese, Threat Research Director at Netskope. “Our research shows the sophistication and scale of the cloud enabled kill chain increasing, requiring security defenses that understand thousands of cloud apps to keep pace with attackers and block cloud threats. For these reasons, any enterprise using the cloud needs to modernize and extend their security architectures.” 89% of enterprise users are in the cloud, actively using at least one cloud app every day. Cloud storage, collaboration, and webmail apps are among the most popular in use. Enterprises also use a variety of apps in those categories – 142 on average – indicating that while enterprises may officially sanction a handful of apps, users tend to gravitate toward a much wider set in their day-to-day activities. Overall, the average enterprise uses over 2,400 distinct cloud services and apps.

Move beyond digital transformation — and improve your ROI

How do you achieve value across an entire digital enterprise and make sure all investments give you that coveted, but sometimes elusive, ROI? You need to do more than transform. You need to transcend traditional approaches to growth and change. As part of PwC’s 2020 Global Digital IQ research, we studied thousands of companies and their digital behaviors. We found that just 5 percent are getting moderate or significant payback from their digital efforts in all areas measured: growth, profits, innovation, customer experience, brand lift, attracting and retaining talent, disrupting their own industry, using data to improve decisions, cutting costs, and combating new industry entrants. This elite group of companies — what we call Transcenders — achieve real payback across their enterprises. They embrace innovation, and they don’t fear change. If this were high school, they’d reign as prom queen, star quarterback, and valedictorian all rolled into one. What does it take to transcend? Four core differentiators deliver consistent, standout performance. And they’re elements many leaders talk about but don’t always act on or get full value from.

It's easy to see a digital transformation business strategy as a fun-filled ride into the future and envision the onslaught of high-fives when new technology (and the associated technology leader) has repositioned the company for change, growth and becoming a digital business. Reading the marketing pitch on digital transformation, it's easy to assume that you buy the right technology and perhaps some services, and after a few months, you arrive in the land of rainbows and unicorns. What's often left out of these stories are two salient facts. First, technology by itself has rarely transformed a business. Kodak invented many of the core technologies for digital photography, but chose to shelve them for a variety of reasons, not the least of which was a concern about cannibalizing its core business. The DVD was a widely available technology, but using it to create a novel business model of sending movies by mail helped Netflix, with its super-easy customer experience, overtake video giant Blockbuster, which clung to its store-based ways.

Home Affairs pushes back against encryption law proposals

The Independent National Security Legislation Monitor (INSLM), Dr James Renwick, went further during public hearings in Canberra this week. Not only did he propose tougher independent oversight of TOLA actions, he repeatedly expressed his concern that the Attorney and the Minister didn't constitute an independent "double lock" for authorising TCNs. Such a double lock is required in the UK, where the equivalent to a TCN must be approved by both the Secretary of State for Home Affairs and the independent Investigatory Powers Commissioner's Office (IPCO). "Leaving aside the personalities and the people who might fill those offices from time to time, nevertheless the Attorney and the Minister for Communications are both members of the same government and the same cabinet," Renwick said on Friday. "There's at least some administrative law which suggests that in those circumstances, they might both be bound by a cabinet decision." Hamish Hansford, DHA's Acting Deputy Secretary for Policy, rejected that view. "Notwithstanding both an Attorney and Minister for Communications are members of a cabinet, they are also independent decision-makers under statute, and they need to exercise those responsibilities independently, if you like," he said.

Looking at the future of identity access management (IAM)

MFA is already popular among some enterprise technologies and consumer applications handling sensitive, personal data (e.g., financial, healthcare), and will continue to transform authentication attempts. A lot has been said about increased password complexity, but human error is still persistent. Adding MFA immediately strengthens authentication by having the user enter a temporarily valid pin code or verify their identity by other means. An area to watch within MFA is the delivery method. SMS notifications, for example, were the first standout but forced some organizations to weigh the added costs that messaging might bring to their mobile phone plans. SMS remains prevalent, but all things adapt, and hackers’ increased ability to hijack these messages has made their delivery less secure. Universal one-time password (OTP) clients, such as Google Authenticator, have both increased security and made the adoption of MFA policies much easier through time-sensitive pin codes. Universal OTPs also do away with the requirement for every unique resource to support its own MFA method.
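Those time-sensitive pin codes are typically generated with the TOTP algorithm (RFC 6238), which apps like Google Authenticator implement: an HMAC over a 30-second time counter, dynamically truncated to a short decimal code. A minimal sketch using only the Python standard library (our own illustration, not any vendor's code):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter."""
    counter = timestamp // step                      # 30-second window index
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# time = 59 seconds, 8 digits -> "94287082".
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082

# A current 6-digit code, as an authenticator app would display it:
print(totp(b"12345678901234567890", int(time.time())))
```

Because both client and server derive the code from the shared secret and the clock, no code ever travels over SMS, which removes the hijacking risk described above.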

Forget the Internet of Things. Here’s what IoT really stands for

The intelligence of things looks less like the restroom in Ethiopia and more like Hartsfield-Jackson Atlanta International Airport, where the world’s largest toilet maker, Toto, has taken things a step further. There, too, the bathrooms are studded with sensors, from the urinals to the faucets. But they don’t just flush automatically; they all report back to a central cloud database. The volume of data is astounding – a single toilet may flush 5,000 times per day. In aggregate, the airport can use this data to predict “rush hour” for the airport bathrooms and deploy custodians before and after to make sure the toilets are clean, the paper towels are stocked, and everything’s running smoothly. “The last decade was about connectivity, and we describe that dynamic with the Internet of Things,” Steve Koenig, vice president of research at the Consumer Technology Association, told Digital Trends. “This decade is really about adding intelligence to different devices, services, etc. We’re confronted with a new IoT: the intelligence of things.” ... “Without intelligence, there is no value,” Kiva Allgood, head of IoT and automotive at Ericsson, told Digital Trends.

How healthcare CIOs can keep their organisations secure

For healthcare environments, ransomware poses one of the scariest types of threats in the entire cyber security arena. Physicians-in-training get a taste of the potential reality during routine training exercises at Maricopa Medical Center. As trainees attempt to use diagnostic equipment, like CT scanners, in resuscitating “patient” dummies, they’re greeted with ransomware lockout messages onscreen demanding Bitcoin payments before the equipment can be used again. They must use their intuition to treat the patient instead of the correct equipment. The price for this can be (again, this is a dummy patient) serious brain damage. The Internet of Things (IoT) unlocks huge potential for organisations, including healthcare entities. But this dependence on internet-connected infrastructure also poses a risk. Avoiding ransomware attacks in healthcare requires a multifaceted approach ... The Health Insurance Portability and Accountability Act (HIPAA) was an important step forward for healthcare security and organisations as well as patients.

Cybersecurity: Hacking victims are uncovering cyberattacks faster

"The buzz around the topic leading up to the GDPR deadline helped to get it in front of senior execs outside of the IT team. Many of them saw the importance of GDPR compliance and they supported measures to improve defences and breach identification," Grout said. While the legislation only applies to the European Union, the impact is also felt by global organisations that do business or transfer data in Europe. That appears to have had an impact on the median dwell time across the globe, which is down from 78 days to 56 days. However, one in ten FireEye investigations still involve organisations that had cyber attackers intruding on the network for over two years, indicating that cyber criminals -- and in some cases, nation-state backed hacking operations -- can still remain very stealthy when compromising networks. "Some of them are being targeted by highly skilled APT [Advanced Persistent Threat] groups that are able to hide themselves for a long time after the initial breach," said Grout. One of the most common weaknesses exploited by attackers -- as identified in the report -- is the failure to enforce multi-factor authentication (MFA) on the enterprise network. A lack of MFA means that cyber criminals who successfully breach or steal passwords can easily gain access to networks.

AI’s bias problem: Why Humanity Must be Returned to AI

If an AI system is built in a contrived laboratory environment with data that isn’t representative of the target audience, or worse, with data whose patterns reflect prejudice, the AI’s decisions will also be prejudiced. It is incredibly difficult for algorithms to ‘unlearn’ these patterns, so it is important that biases are not built into the algorithm from the first phases of implementation. Origins of bias can be nuanced and hard to spot, ranging from historic prejudices based on race and gender to a lack of diversity within training sets. As a consequence, certain groups are disproportionately represented. A study by the National Institute of Standards and Technology (NIST) found that facial recognition misidentified African-American and Asian faces 10 to 100 times more often than Caucasian faces, while Native Americans were misidentified more than any other group. The study also revealed that women were falsely identified more often than men, and that senior citizens were misidentified at more than 10 times the rate of middle-aged adults. According to a report by the AI Now Institute at New York University, the lack of diverse training data also threatens to worsen the historic underemployment of disabled people.

Achieving SOC 2 Compliance in DevOps

If you are wondering at this point whether AWS complies with SOC 2, you are not alone. AWS as a cloud environment is designed to comply with SOC 2 requirements; at the very least, the ecosystem offers tools that make compliance easy. SOC 2 compliance is something that AWS takes seriously. In fact, AWS keeps the location of its data centers confidential to ensure maximum security. It also offers high resilience with multiple redundancies and automated disaster recovery measures. Through AWS Artifact, you can gain access to all SOC reports, including the SOC 2 Security, Availability, and Confidentiality reports generated by AWS. All controls are documented, and you have the complete list of services in scope. AWS has an extensive set of tools for maintaining controls and ensuring compliance. Amazon CloudWatch is a good example of a comprehensive monitoring tool that you can use across the AWS ecosystem. The same is true for AWS CloudTrail and Amazon GuardDuty. You also have AWS Shield, which offers ready-to-deploy security measures.
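As one hedged illustration of codifying such monitoring, a CloudWatch alarm can be declared in CloudFormation. The metric shown here (console sign-in failures published by a CloudTrail metric filter) and the `SecurityNotificationTopic` SNS topic are hypothetical choices for the example, not controls mandated by SOC 2:

```json
{
  "Resources": {
    "FailedSigninAlarm": {
      "Type": "AWS::CloudWatch::Alarm",
      "Properties": {
        "AlarmDescription": "Example access-control evidence: alert on repeated console sign-in failures",
        "Namespace": "CloudTrailMetrics",
        "MetricName": "ConsoleSigninFailureCount",
        "Statistic": "Sum",
        "Period": 300,
        "EvaluationPeriods": 1,
        "Threshold": 3,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [{"Ref": "SecurityNotificationTopic"}]
      }
    }
  }
}
```

Declaring alarms this way keeps the monitoring control itself under version control, which is the kind of auditable evidence a SOC 2 assessment looks for.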

Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray

Daily Tech Digest - February 18, 2020

Artificial Human Beings: The Amazing Examples Of Robotic Humanoids And Digital Humans

Digital human beings are photorealistic digitized virtual versions of humans. Consider them avatars. While they don't necessarily have to be created in the likeness of a specific individual (they can be entirely unique), they do look and act like humans. Unlike digital assistants such as Alexa or Siri, these AI-powered virtual beings are designed to interact, sympathize, and have conversations just like a fellow human would. Here are a few digital human beings in development or at work today: Neons: AI-powered lifeforms created by Samsung’s STAR Labs, Neons include unique personalities such as a banker, K-pop star, and yoga instructor. While the technology is still young, the company expects that, ultimately, Neons will be available on a subscription basis to provide services such as customer service or concierge. Digital Pop Stars: In Japan, new pop stars are getting attention—and these pop stars are made of pixels. One of the band members of AKB48, Amy, is entirely digital and was made by borrowing features from the human artists in the group. Another Japanese artist, Hatsune Miku, is a virtual character from Crypton Future Media.

Edge computing enables near-real-time application engagements. While local computing is not new, edge computing has emerged because technologies, such as content delivery networks and local edge devices and gateways, can now aggregate IoT sensor and mobile device insights to enable on-demand actions where people and physical processes exist, need them, and benefit from them. Want to dramatically improve customer experience, employee experience, and business achievements? This is powerful empowerment. Edge computing architectures have three major building blocks. Edge computing varies across different solution use cases and value scenarios, so it's difficult to define just a single pattern for everyone. Forrester's research does find three general building blocks core to most scenarios: edge management layers, edge networks, and edge intelligence fabric software. The research also covers enterprise and government use cases, case studies of how your peers are empowering their customers and advancing their market value with these technologies, and the functions and components of edge computing, along with the vendor landscape across all industries and the services already offered.

It isn’t just the engineering team that should focus on developing the product offering or key consumer touchpoints. Employees across the organisation are valuable, as they all interact with different stages of the customer journey and can provide valuable insights into pain points. They are capable of delivering a constant flow of new ideas to improve the digital customer experience, asking what will add value for your customers while engineering teams integrate a process to make it a reality. It’s no longer about the waterfall approach of working in segments, but rather about coming together as a collaborative business and empowering the DevOps team to make the technical decisions needed to turn the ideas into reality. Never underestimate the importance of collaboration in innovation. Giving employees at all levels the opportunity to get involved with their own ideas, perhaps via collaborative brainstorming sessions with the engineering team, can mean the risk of analysis paralysis is averted, as everyone is involved from the beginning. It is essential for the management team to provide employees with not only the opportunity to share their thoughts about ways to develop the business, but also the training to help them use data and technology to bring these ideas to life.

Bala is right to call out that one of the primary benefits of serverless and "single-purpose microservices" is that "You can use the right tool for the right job rather than being constrained to one language, one framework or even one database." This is immensely freeing for developers: instead of writing monolithic applications that likely have very low utilization with spiky workloads, they can build microservices tied to ephemeral serverless functions. When the system is idle, it shuts down and costs nothing to run. Everyone wins. This can also make maintaining code more straightforward. For monolithic applications, updating code can present a major burden because of the difficulty inherent in covering all dependencies. As Ophir Gross has noted, "Spaghetti code is full of checks to see what interface version is being used and to make sure that the right code is executed. It's often disorganized and usually results in higher maintenance efforts as changes in code affect functionality in areas that are challenging to predict during development stages."
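A minimal sketch of such a single-purpose function in Python: the `(event, context)` signature follows AWS Lambda's handler convention, while the event fields are hypothetical. The function holds no state between invocations, which is what lets the platform scale it to zero when idle:

```python
import json

def handler(event, context=None):
    """Single-purpose, stateless function: total up an order.

    It does exactly one job and keeps nothing in memory between
    calls, so the platform can spin instances up and down freely.
    """
    items = event.get("items", [])
    total = sum(item["price"] * item["qty"] for item in items)
    return {"statusCode": 200, "body": json.dumps({"total": total})}
```

A sibling service that, say, sends receipts would live in its own function, possibly in a different language, which is the "right tool for the right job" freedom the quote describes.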

DDoS Attacks Nearly Double Between Q4 2018 and Q4 2019

DDoS attackers continued to leverage non-standard protocols for amplification attacks in the last quarter of 2019, researchers found. Adversaries have also adopted Apple Remote Management Service (ARMS), part of the Apple Remote Desktop (ARD) application for remote administration. This tactic was first spotted in June 2019; by October, attacks were widespread. The fourth quarter of 2019 brought multiple high-profile DDoS attacks, including threats against financial organizations in South Africa, Singapore, and nations across Scandinavia. DDoS attacks aimed to cause disruption for the United Kingdom's Labour party and also targeted Minecraft servers at the Vatican. In a more recent case, just last week the FBI warned of a potential DDoS attack targeting a state-level voter registration and information site. "This demonstrates that DDoS is still a common attack method among cybercriminals driven by ideological motives or seeking financial gain, and organizations should be prepared for such attacks and have a deep understanding of how they evolve," researchers said in a statement.

Keeping up with disruptors through hybrid integration

For consumers of the digital era, experience is everything. They expect newfound convenience and flexibility and will have no problem looking elsewhere if this cannot be provided. This begs the question: how can the traditional players hope to keep up? Things aren’t as complex as they seem, however. One reason these new companies can drive such positive results comes down to the fact that they have no reliance on legacy databases and can take advantage of existing third-party systems. For example, Citymapper leverages open data from Transport for London to retrieve journey information and provide real-time visibility over transport schedules, allowing customers to make the best choice of journey based on timings. Meanwhile, Uber uses Google’s APIs to run its mapping software and match customers with the drivers closest to them. From there, the data is stored and used to predict supply and demand, as well as set fares. In both cases, these services have been built on existing integrations, meaning they don’t run into the same problems as many of the established players.

What Do Facebook's Quiet AI Acquisitions Across the UK Signify?

Amid all the controversies and roadblocks in its drive to attain AI leadership, the company is moving forward with innovation and tech developments. These developments are largely the result of its acquisitions: small but significant. Facebook’s M&A activities are proving to be quite beneficial in its AI journey. Recently, the company acquired Scape Technologies, a London-based computer vision startup working on location accuracy beyond the capabilities of GPS. Full terms of the deal remain as yet unknown, although a Companies House update reveals that Facebook Inc. now has majority control of the company (more than 75%). Further, regulatory filings show that Scape’s previous venture capital representatives have resigned from the Scape board and have been replaced by two Facebook executives. ... Meanwhile, the acquisition by Facebook, no matter what form it takes, looks like a good fit given the US company’s investment in next-generation platforms, including VR and AR. It is also another — perhaps worrying — example of US tech companies hoovering up UK machine learning and AI talent early.

Why AI systems should be recognized as inventors

It’s important to note that the Artificial Inventor Project doesn’t want AI systems to own the patents for their creations. Such an interpretation of the case confuses ownership of patent rights with inventorship. Hence the DABUS applications list the AI as the inventor, with the AI’s owner listed as the patent applicant and prospective owner of any issued patents. It will be many years before the team learns the full outcome of its applications. The team is appealing the rulings of both the EPO and the UK IPO. Decisions in other jurisdictions, including the US, Germany, Israel, Taiwan, China, and Korea, are still pending, as is one filed under the Patent Cooperation Treaty, which facilitates the patent application process in more than 150 states. The World Intellectual Property Organization and the United States Patent and Trademark Office, meanwhile, have both requested comments on how they could develop policies for such applications. They may need to address any ambiguity over who owns the patents for AI-generated inventions when both the creator of the system and an individual user have contributed to its output. But granting ownership to the person who made the AI operable may be the most straightforward solution.

Mac attacks on the rise

"We saw a significant rise in the overall prevalence of Mac threats in 2019, with an increase of over 400% from 2018,'' the report by Malwarebytes Labs stated. Part of that increase can be attributed to an increase in its Malwarebytes for Mac user base, the report noted. To see if that increase reflected what was actually happening in the Mac threat landscape, Malwarebytes said, it examined threats per endpoint on both Macs and Windows PCs. "In 2019, we detected an average of 11 threats per Mac endpoint--nearly double the average of 5.8 threats per endpoint on Windows,'' the report said. Another key finding was that overall, consumer threat detections were down by 2% from 2018, but business detections increased by 13% in 2019, the report said. This resulted in a mere 1% increase in threat volume year-over-year. The sophistication of threat capabilities in 2019 increased, with many using exploits, credential stealing tools, and multi-stage attacks involving mass infections of a target, the report said. While seven of 10 top consumer threat categories decreased in volume, HackTools--a threat category for tools used to hack into systems and computers--increased against consumers by 42% year-over-year, bolstered by families such as MimiKatz, which also targeted businesses, the report said.

4 principles of analytics you cannot ignore

Data is a resource. If you are not analyzing it, it is an unused resource. At SAS, we often say, “Data without analytics is value not yet realized.” Naturally, then, wherever there is data, there needs to be analytics. But what does that mean today, when we are generating more data, and more diverse data, than ever before? And all of that data streams or moves about many different networks. The first principle of analytics is about bringing the right analytics technology to the right place at the right time. Whether your data is on-premises, in a public or private cloud, or at the edges of the network, analytics needs to be there with it. ... You should pay great attention to the quality, robustness and performance of your algorithms. But the value of analytics is not in the features and functions of the algorithm – not anymore. The value is in solving data-driven business problems. The analytics platform is a commodity – everybody has algorithms. But operationalizing analytics is not a commodity. Everybody is challenged with bringing analytics to life. When you deploy analytics in production, it drives value and decisions.

Quote for the day:

"To be able to lead others, a man must be willing to go forward alone." -- Harry Truman

Daily Tech Digest - February 16, 2020

Is Your Cybersecurity Workforce Ready To Win Against Cybercriminals?

A trained staff is a critical business asset when it comes to handling information security projects. Whether your company is involved in a simple privileged access management (PAM) project or implementing a complex continuous adaptive risk and trust assessment (CARTA-based) strategy design, success depends on employee competency. Now that you have a training plan, implement it by assigning specific information security training certifications or training modules to each employee and measure the effectiveness and quality of execution against your business goals. ... The ultimate goal is to foster a cybersecurity culture across the organization. This is a tough task because it involves the human aspects of cybersecurity. Be prepared for resistance, and plan efforts to address employee concerns in an understanding and open manner. Empathy will get you to your goals faster than issuing strict directives and hoping employees will follow. Make cybersecurity practices a routine part of your business processes as well as strategic concerns. This 360-degree approach will become your best defense against information security risks.

For enterprise developers attempting to meet the highly specialized needs of a vertical and tech-savvy users' expectations, low-code platforms are a way to handle the scalability, data management, architecture and security concerns that hold back internal bespoke software projects. To be worth the money, a low-code platform must be flexible enough to build almost any app securely, even if it's only for internal users, said AbbVie's Cattapan. Low-code examples at the company range from a shipment management app to track chemicals around its labs and manufacturing campus, to a reporting app related to drug approval rules in more than 200 countries. To work for these purposes, a low-code platform has to scale in diverse situations: "We might have a really large dataset ... and we want the app server next to the data, but we also want the option to have it up in the cloud," Cattapan said. 

4 in Chinese Army Charged With Breaching Equifax
While many of the security issues at Equifax in 2017 have been discussed in lawsuits, investigations and news media reports, the new indictments offer some additional details of what happened starting in May of that year. After exploiting the vulnerability in Apache Struts, the hackers allegedly gained access to Equifax's online dispute portal in order to gain a foothold within the corporate network and steal more credentials, according to the indictment. After that, the four hackers spent several weeks mapping the network and running queries to understand what databases they could access and which ones held the personal data and intellectual property they were seeking, the indictment says. The hackers ran about 9,000 queries within the network over the course of several months, it adds. "Once they accessed files of interest, the conspirators then stored the stolen information in temporary output files, compressed and divided the files, and ultimately were able to download and exfiltrate the data from Equifax's network to computers outside the United States," prosecutors say.

How Edge Computing Is Supercharging the Internet of Things
Though many may imagine servers as rows of tall, boxy machines, in recent years servers have gone mobile, enabling edge computing on the road. Vehicle servers are a boon to law enforcement officers, who can avoid spending precious time on tasks such as manually keying in a license plate number to check suspicious vehicles. Police cruisers equipped with servers such as NEXCOM's MVS series of vehicle servers powered by Intel® Core and Intel Atom processors can quickly decode images of cars taken by a cruiser's rooftop camera, identify license plates, and determine whether they're listed in a database of vehicles of interest to law enforcement. ... Machines can see what humans miss. Imagine a failing motor on a factory floor begins to vibrate more quickly. That initial, negligible acceleration won't be noticeable to workers. But an electronic vibration sensor detects it, triggering analysis by predictive maintenance software. The software notifies personnel, who address the problem before it leads to a costly equipment breakdown. Edge computing helps manufacturers make the most efficient use of predictive maintenance technology.
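The edge-side check behind that failing-motor scenario can be sketched as a simple baseline comparison. This toy monitor (the class name and thresholds are illustrative, not taken from any vendor's product) flags a vibration reading that drifts several standard deviations away from recent history:

```python
from collections import deque
from statistics import mean, stdev

class VibrationMonitor:
    """Toy predictive-maintenance check run at the edge.

    Keeps a sliding window of recent readings and flags any new
    reading far outside that baseline, so maintenance can be
    scheduled before the motor actually fails.
    """
    def __init__(self, window=50, threshold=3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold  # alert at N standard deviations

    def observe(self, value):
        alert = False
        if len(self.readings) >= 10:  # need a baseline first
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                alert = True  # e.g., notify the maintenance crew
        self.readings.append(value)
        return alert
```

Running this next to the sensor means only the rare alert, not the raw 24/7 stream, needs to leave the factory floor.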

IBM highlights new approach to infuse knowledge into NLP models

There have been two schools of thought or "camps" since the beginning of AI: one has focused on the use of neural networks/deep learning, which have been very effective and successful in the past several years, said David Cox, director for the MIT-IBM AI Watson Lab. Neural networks and deep learning need data and additional compute power to thrive. The advent of the digitization of data has driven what Cox called "the neural networks/deep learning revolution." Symbolic AI is the other camp and it takes the point of view that there are things you know about the world around you based on reason, he said. However, "all the excitement in the last six years about AI has been about deep learning and neural networks,'' Cox said. Now, "there's a grouping idea that just as neural networks needed something like data and compute for a resurgence, symbolic AI needed something,'' and the researchers theorized that maybe what it needs is neural networks, he said. There was a sense among researchers that the two camps could complement each other and capitalize on their respective strengths and weaknesses in a productive way, Cox said.

The Kongo Problem: Building a Scalable IoT Application with Apache Kafka

Kafka is a distributed stream processing system which enables distributed producers to send messages to distributed consumers via a Kafka cluster. Simply put, it’s a way of delivering messages where you want them to go. Kafka is particularly advantageous because it offers high throughput and low latency, powerful horizontal scalability, and the high reliability necessary in production environments. It also enables zero data loss and brings the advantages of being open source and a well-supported Apache project. At the same time, Kafka allows the use of heterogeneous data sources and sinks – a key feature for IoT applications that can leverage Kafka to combine heterogeneous sources into a single system. In order to achieve high throughput, low latency, and horizontal scalability, Kafka was designed as a "dumb" broker with a "smart" consumer. This results in different trade-offs in functionality and performance compared to other messaging technologies such as RabbitMQ and Pulsar.
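The "dumb broker, smart consumer" split can be illustrated with a toy in-memory version: the broker only appends records and never tracks who has read what, while each consumer keeps its own offset. That is what lets many consumers read, and replay, the same log independently. This is a conceptual sketch, not Kafka's actual implementation:

```python
class Log:
    """Toy 'dumb broker': an append-only record log that never
    tracks which consumers have read which records."""
    def __init__(self):
        self.records = []

    def append(self, record):
        self.records.append(record)
        return len(self.records) - 1  # offset of the new record

class Consumer:
    """Toy 'smart consumer': owns its read position (offset), so
    consumers advance independently and can replay from anywhere."""
    def __init__(self, log):
        self.log = log
        self.offset = 0

    def poll(self):
        batch = self.log.records[self.offset:]
        self.offset = len(self.log.records)
        return batch
```

Because the broker does no per-consumer bookkeeping, adding another consumer costs it almost nothing, which is one reason this design scales horizontally so well.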

Deep Instinct nabs $43M for a deep-learning cybersecurity solution that can suss an attack before it happens

“Deep Instinct is the first and currently the only company to apply end-to-end deep learning to cybersecurity,” he said in an interview. In his view, this provides a more advanced form of threat protection than the common traditional machine learning solutions available in the market, which rely on feature extractions determined by humans, which means they are limited by the knowledge and experience of the security expert, and can only analyze a very small part of the available data (less than 2%, he says). “Therefore, traditional machine learning-based solutions and other forms of AI have low detection rates of new, unseen malware and generate high false-positive rates.” There’s been a growing body of research that supports this idea, although we’ve not seen many deep learning cybersecurity solutions emerge as a result (not yet, anyway). He adds that deep learning is the only AI-based autonomous system that can “learn from any raw data, as it’s not limited by an expert’s technological knowledge.” In other words, it’s not based just on what a human inputs into the algorithm, but is based on huge swathes of big data, sourced from servers, mobile devices and other endpoints, that are input in and automatically read by the system.

What Differentiates AI Leaders, According To A Founder Of Globant

Given that AI is so laden with ambiguity, companies often lack clarity in terms of determining what AI can do for them and how they can build roadmaps that will empower them to most effectively implement the technology. What’s more, half of the organizations don’t have a clear definition of how employees and AI will most productively work together. In order to succeed, organizations must work to define the role of AI in their workplace and the ideal relationship between AI and employees. Armed with this knowledge, organizations will be primed to adopt the most appropriate AI solution for their business and customer needs. Recognizing that companies face an uphill battle to understand how AI can help them realize their organizational objectives, Globant has embraced a unique organizational structure called “Agile Pods.” Pods are multidimensional teams comprised of members from Globant’s various Studios that serve as customer-facing service delivery teams and help ensure that its solutions are built and implemented with a customer-first mindset.

Rethinking change control in software engineering

Programmers who make mistakes with their conditional feature flags could accidentally deploy a change to production when it is supposed to stay dark, which means they might not be able to roll it back -- not easily, at least. The key to using feature flags is to place them where they make sense and to diligently make smart decisions about the risk they create. A key issue in change control in software engineering is figuring out whom change control affects and how it affects them. If nearly everyone is affected by a change -- a likely scenario for teams contributing to a single mobile app deployment -- there tends to be heavy regression testing, triage meetings, go/no-go meetings and documentation. This bureaucratic process often adds cost and delays, and it can be difficult to see where exactly the process provides value. One way to cut away barrier-inducing change control processes is to isolate the impact of changes.
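One common way to reduce the accidental-exposure risk is to make the flag check fail closed, so an unknown or mistyped flag name leaves the change dark. A minimal sketch, with a hypothetical flag store and flag names:

```python
# Hypothetical in-process flag store; real systems load this from
# a config service so flags can change without a redeploy.
FLAGS = {
    "new-checkout": {"enabled": False, "allow_users": {"qa-team"}},
}

def is_enabled(flag, user=None):
    """Check a feature flag, defaulting to dark (False) when the
    flag is missing, so a typo cannot expose unfinished work."""
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False  # fail closed
    return cfg["enabled"] or (user in cfg.get("allow_users", set()))
```

An allow-list like `allow_users` lets a change run in production for testers only, which narrows the blast radius the paragraph above warns about.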

5 biggest mistakes developers can make in a job interview

Interviews can be nerve-racking, but developers must avoid letting that apprehension take over their thought processes, said Tomás Pueyo, vice president of growth at Course Hero. "The biggest mistake I see when interviewing tech candidates is jumping to solutions before understanding the problem," Pueyo said. "Candidates are eager to answer questions, so they believe the faster they come up with a solution, the cleverer they will sound. But this is not what our job is about." "In tech, we deal with massive amounts of data, solving problems that are frequently unclear. A key marker of wisdom is taking a step back, gathering all the available information, understanding it, and only then jumping to solutions," Pueyo added. While interviews do focus on questioning the interviewee, the candidate should also have their own questions prepared, Hill said. "As a hiring manager, I expect the candidate to come with their own questions. That's how I know that they're enthusiastic about the company, and that they're eager to learn and improve," Hill noted.

Quote for the day:

"Leadership is about carrying on when everyone else has given up" -- Gordon Tredgold

Daily Tech Digest - February 15, 2020

How Can Companies Minimize Risk Against Emerging Threats?

It's estimated that there is a ransomware attack every 14 seconds somewhere in the world. By far, the single greatest vulnerability that companies continue to face is the infiltration of malware from phishing campaigns. Other vulnerabilities stem from the proliferation of IoT components, cloud storage and computing, and new data and financial apps that external vendors provide and install on the organization's systems. To battle the threat, I believe a dedicated effort must go all the way up to the C-level to ensure that everyone is put to the task, because when an intrusion attempt succeeds, it's already too late. It can take hackers as little as 19 minutes to get into a system and up to eight hours for many companies to respond due to their obligation to internal processes. Many larger companies install a variety of specialized solutions to protect themselves in different areas, and there seems to be no end of products answering very specific threats. Too often, though, that buildup of solutions from a multitude of vendors exacerbates the risk that each patch is intended to guard against.

Emotion AI researchers say overblown claims give their work a bad name

Emotion recognition, also known as affective computing, is still a nascent technology. As AI researchers have tested the boundaries of what we can and can’t quantify about human behavior, the underlying science of emotions has continued to develop. There are still multiple theories, for example, about whether emotions can be distinguished discretely or fall on a continuum. Meanwhile, the same expressions can mean different things in different cultures. In July, a meta-study concluded that it isn’t possible to judge emotion by just looking at a person’s face. The study was widely covered, often with headlines suggesting that “emotion recognition can’t be trusted.” Emotion recognition researchers are already aware of this limitation. The ones we spoke to were careful about making claims of what their work can and cannot do. Many emphasized that emotion recognition cannot actually assess an individual’s internal emotions and experience. It can only estimate how that individual’s emotions might be perceived by others, or suggest broad, population-based trends.

AIoT – Convergence of Artificial Intelligence with the Internet of Things

Large volumes of confidential company information and user data are tempting targets for dark web hackers as well as government entities around the globe. The high level of risk has also brought new responsibilities that accompany the increased capability. Sensors are now applied to almost everything, which means that infinitely more data can be collected from every transaction or process in real time. IoT devices are the front line of the data collection process in manufacturing environments and in customer service departments. Any device with a chipset can potentially be connected to a network and begin streaming data 24/7. Complex algorithms make it possible to perform predictive analytics from all conceivable angles. Machine learning (ML), a subset of AI, continues to upgrade workflows and simplify problem-solving. Companies now capture all the meaningful data surrounding their processes and problems to develop specific solutions for real challenges within the organization, improving efficiency, reliability, and sustainability.

8 steps to being (almost) completely anonymous online

The universe believes in encryption, a wise man once opined, because it is astronomically easier to encrypt than it is to brute force decrypt. The universe does not appear to believe in anonymity, however, as it requires significant work to remain anonymous. We tend to use privacy and anonymity interchangeably, which is incorrect. An encrypted message may protect your privacy — because (hopefully) no one else can read it besides you and your recipient — but encryption does not protect the metadata, and thus your anonymity. Who you're talking to, when, for how long, how many messages, size of attachments, type of communication (text message? email? voice call? voice memo? video call?) — all this information is not encrypted and is easily discoverable by sophisticated hackers with a mass surveillance apparatus, which is most of them these days. A final thought before we dig into specific technical tools: "Online" is now a meaningless word. Meatspace and cyberspace have merged. We used to live in the "real world" and "go online."
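To make the privacy-versus-anonymity distinction concrete, the toy sketch below uses a one-time-pad XOR (for illustration only, not a production cipher): even when the content is perfectly hidden, metadata such as the message size still leaks.

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt with a one-time pad: XOR each byte with a random key byte."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

message = b"meet me at midnight"
ciphertext, key = otp_encrypt(message)

# The content is information-theoretically hidden...
print(ciphertext.hex())
# ...but the metadata (here, the message length) is in plain view.
print(f"ciphertext leaks length: {len(ciphertext)} bytes")
```

The same observation applies to timing, frequency, and endpoints of messages: none of these is concealed by encrypting the payload, which is why anonymity requires additional tooling beyond encryption.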

MIT finds massive security flaws with blockchain voting app

MIT researchers released a lengthy paper on Thursday that said hackers could change votes through the app, which has already been used in Oregon, West Virginia, Washington and Utah since 2018. "Their security analysis of the application, called Voatz, pinpoints a number of weaknesses, including the opportunity for hackers to alter, stop, or expose how an individual user has voted," MIT said in a news release. Additionally, "the researchers found that Voatz' use of a third-party vendor for voter identification and verification poses potential privacy issues for users," the MIT press release said. In a blog post and call with reporters, Voatz defended its security practices and disputed the claims made by the MIT researchers. The company said the research paper was based on an "old version" of the app and that, because of this, many of the claims were invalid. "Voatz has worked for nearly five years to develop a resilient ballot marking system, a system built to respond to unanticipated threats and to distribute updates worldwide with short notice."

The time is now: How to manufacture your smart factory with Industrial IoT

Although the value of digital innovation is apparent, widespread adoption has been slow, due to a myriad of challenges. For many organisations, the biggest challenge is available talent — they simply don't have the internal expertise to plan and execute digital innovation initiatives. With continued strain on IT budgets, organisations struggle to both manage the priorities of today and invest in the talent needed to help them transform their business. A new report by PwC identified hiring more Internet of Things (IoT) engineers and data scientists – while training the wider workforce in digital skills – as a key change CEOs must implement if they want to maximise the benefits from digitisation of manufacturing. Legacy technology is another factor holding manufacturers back. The average factory today is 25 years old, according to McKinsey, with machinery that's approaching nine years old. Before these plants can begin integrating the IoT, they must first upgrade equipment to enable digital readiness. Driven by immediate goals of reducing costs and boosting returns, some manufacturing companies have deferred technology investment.

Microsoft's Windows Terminal: This is the final preview of its new command-line tool

This update brings new command-line arguments, such as the 'wt' execution alias. Users can now launch Terminal with new tabs and split panes, which open with preferred profiles and directories. Terminal developers point out that the 'wt' design was "heavily inspired by that of the venerable and beloved GNU screen competitor" called tmux, a terminal multiplexer for Unix-like systems. "You can wt new-tab, wt split-pane, wt new-tab -p Debian ; split-pane -p PowerShell until your heart's content," says Dustin Howett, an engineering lead at Microsoft. ... This release also has some goodies for PowerShell Core fans, with Terminal now automatically finding PowerShells on a system. "The Windows Terminal will now detect any version of PowerShell and automatically create a profile for you," explains Kayla Cinnamon, Windows Terminal program manager. "The PowerShell version we think looks best (starting from highest version number, to the most GA version, to the best-packaged version) will be named as 'PowerShell' and will take the original PowerShell Core slot in the dropdown."
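For illustration, the 'wt' alias composes subcommands with semicolons, tmux-style. The profile names and directory below are assumptions for a typical setup, not values from the article, and profiles must already exist in your Terminal settings:

```shell
# Launch Terminal with a new tab using the default profile
wt new-tab

# Open a "Debian" profile tab, then split it with a PowerShell pane
wt new-tab -p "Debian" ; split-pane -p "PowerShell"

# Start in a specific directory with the -d flag
wt -d C:\Users\me\projects
```

Note that when invoking these from PowerShell (rather than cmd), the semicolon separating subcommands needs to be escaped or quoted so PowerShell itself does not consume it.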

Machine learning could lead cybersecurity into uncharted territory

Security threats are evolving to include adversarial attacks against AI systems; more expensive ransomware targeting cities, hospitals, and public-facing institutions; misinformation and spear phishing attacks that can be spread by bots in social media; and deepfakes and synthetic media that have the potential to become security vulnerabilities. In the cover story, European correspondent Chris O’Brien dove into how the spread of AI in security can lead to less human agency in the decision-making process, with malware evolving to adapt and adjust to security firm defense tactics in real time. Should the costs and consequences of security vulnerabilities increase, ceding autonomy to intelligent machines could begin to seem like the only right choice. We also heard from security experts like McAfee CTO Steve Grobman, F-Secure’s Mikko Hypponen, and Malwarebytes Lab director Adam Kujawa, who talked about the difference between phishing and spear phishing, addressed an anticipated rise in personalized spear phishing attacks, and spoke generally to the fears — unfounded and not — around AI in cybersecurity.

Cloud Threat Report Shows Need for Consistent DevSecOps

Image: areebarbar - Adobe Stock
Despite efforts to educate developers on the importance of security, he says most developers believe their top priority is getting new features and functionality out as quickly as possible. “Yes, they’re supposed to engineer-in security but that doesn’t happen in many cases,” Chiodi says. “Many organizations have not yet embraced the concept of DevSecOps.” Unit 42’s research shows that forward-leaning organizations such as consumer companies want to operate at cloud scale, serving a multitude of users, while maintaining security. Chiodi cites Netflix as a company that does so because it fully integrated development, security, and operations. He suggests that security teams should also embrace infrastructure as code to automatically put written security policies into code. “That way when a developer creates a new cloud environment, if it has security standards coded right in, every time they create from that template it will be the same every time,” he says. Conversely, Chiodi says a template with vulnerabilities will repeat those vulnerabilities each time it is applied.
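As a minimal sketch of the "security policies as code" idea, a template can be validated automatically before any environment is created from it. The policy rules and template fields below are hypothetical examples, not taken from the Unit 42 report:

```python
# Hypothetical security policies, expressed as code: each rule takes a
# cloud-environment template (a dict) and returns True if it complies.
POLICIES = {
    "no_public_storage": lambda t: not t.get("storage_public", False),
    "encryption_at_rest": lambda t: t.get("encrypt_at_rest", False),
    "ssh_not_open_to_world": lambda t: "0.0.0.0/0" not in t.get("ssh_ingress", []),
}

def check_template(template: dict) -> list[str]:
    """Return the names of all policies the template violates."""
    return [name for name, rule in POLICIES.items() if not rule(template)]

insecure = {"storage_public": True, "encrypt_at_rest": False,
            "ssh_ingress": ["0.0.0.0/0"]}
secure = {"storage_public": False, "encrypt_at_rest": True,
          "ssh_ingress": ["10.0.0.0/8"]}

print(check_template(insecure))  # all three policies violated
print(check_template(secure))    # → []
```

Wired into a CI pipeline, a check like this makes Chiodi's point concrete: a template that passes produces the same compliant environment every time, while a template with baked-in vulnerabilities is rejected before it can replicate them.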

Election hacking: is it the end of democracy as we know it?

Election hacking: is it the end of democracy as we know it? image
According to David Emm, senior security researcher at Kaspersky Lab, “the term ‘hacking’ often gets used loosely to refer to different attempts to interfere in elections. These include using social media to try and shape opinions or stealing data held on compromised computers to try and shame political figures, as well as tampering directly with machines used to manage the voting process.” Mateo Meier, the founder and CEO of Artmotion, a cloud security company, agrees that “threat actors will use all available tools at their disposal to hack the outcome [of an election]. So it’s always likely to be a multi-pronged approach rather than a single data breach during election season.” In recent years, governments have made some serious accusations, and researchers have demonstrated how vulnerabilities in voting machines can be targeted. “Such vulnerabilities have also been seen in the real world, with NSW election results being challenged over [the] iVote security flaw. Yet, it’s difficult to gauge the impact a successful real-world attack would have.”

Quote for the day:

"Leaders need to be optimists. Their vision is beyond the present." -- Rudy Giuliani