Daily Tech Digest - September 19, 2020

Why we need XAI, not just responsible AI

There are many techniques organisations can use to develop XAI. As well as continually teaching their system new things, they need to ensure that it is learning correct information and does not use one mistake or piece of biased information as the basis for all future analysis. Multilingual semantic searches are vital, particularly for unstructured information. They can filter out the white noise and minimise the risk of seeing the same risk or opportunity multiple times. Organisations should also add a human element to their AI, particularly if building a watch list. If a system automatically red flags criminal convictions without scoring them for severity, a person with a speeding fine could be treated in the same way as one serving a long prison sentence. For XAI, systems should always err on the side of the positive. If a red flag is raised, the AI system should not give a flat ‘no’ but should raise an alert for checking by a human. Finally, even the best AI system should generate a few mistakes. Performance should be an eight out of ten, never a ten, or it becomes impossible to trust that the system is working properly. Mistakes can be addressed, and performance continually tweaked, but there will never be perfection.
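The watch-list advice above, score severity rather than issue a flat 'no', can be sketched in a few lines. All names and thresholds here are hypothetical illustrations, not any vendor's implementation:

```python
# Hypothetical severity scores for watch-list hits; an unscored system
# would treat all of these identically.
SEVERITY = {"speeding_fine": 1, "fraud_conviction": 7, "violent_crime": 9}

def triage(hit: str, threshold: int = 5) -> str:
    """Never return a flat 'no': serious hits are routed to a human
    reviewer rather than auto-rejected; minor hits pass with a note."""
    score = SEVERITY.get(hit, 0)
    if score == 0:
        return "pass"
    return "review" if score >= threshold else "pass_with_note"

print(triage("speeding_fine"))     # minor offence, not auto-rejected
print(triage("fraud_conviction"))  # escalated to a human reviewer
```

The point of the sketch is the shape of the decision, not the numbers: every red flag above the threshold becomes an alert for a person, never an automatic rejection.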


What classic software developers need to know about quantum computing

There are many different parts of quantum that are exciting to study. One is quantum computing: using quantum systems to do any sort of information processing. Another is quantum communication itself. And maybe the third part that doesn't get as much media attention, but should, is sensing: using quantum devices to sense things much more sensitively than you could classically. So think about sensing very small magnetic fields, for example. The communication aspect is just as important, because at the end of the day it's important to have secure communication between your quantum computers as well. So this is something exciting to look forward to. ... So the first tool that you need, and one of the most important, is the one that gives you access to the quantum computers. If you go to quantum-computing.ibm.com and create an account there, we give you immediate access to several quantum computers, which, every time I say it, just blows my mind, because four years ago this wasn't a thing. You couldn't go online and access a quantum computer. I was in grad school because I wanted to do quantum research and needed access to a lab to do this work.
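The basics can even be explored without hardware access: a two-qubit Bell state fits in a few lines of plain Python as a statevector simulation. This is an illustration only, not the IBM Quantum API, and the hosted machines run real hardware, not a toy like this:

```python
import math

# Minimal two-qubit statevector simulation.
# Amplitude ordering: |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def hadamard_on_qubit0(s):
    """Apply H to qubit 0 (the left bit): mixes |0x> with |1x>."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot_control0_target1(s):
    """Flip qubit 1 when qubit 0 is 1: swaps |10> and |11>."""
    return [s[0], s[1], s[3], s[2]]

bell = cnot_control0_target1(hadamard_on_qubit0(state))
probs = [abs(a) ** 2 for a in bell]
print(probs)  # weight concentrates on |00> and |11>
```

Measuring either qubit of this state yields 0 or 1 with equal probability, and the two qubits always agree, which is the entanglement that quantum communication protocols build on.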


Why Darknet Markets Persist

"There are two main reasons here: the lack of alternatives and the ease of use of marketplaces," researchers at the Photon Research Team at digital risk protection firm Digital Shadows tell Information Security Media Group. At least for English-speaking users, such considerations often appear to trump other options, which include encrypted messaging apps as well as forums devoted to cybercrime or hacking. And many users continue to rely on markets despite the threat of exit scams, getting scammed by sellers or getting identified and arrested by police. Another option is Russian-language cybercrime forums, which continue to thrive, with many hosting high-value items. But researchers say that, even when armed with translation software, English speakers often have difficulty coping with Russian cybercrime argot. Many Russian speakers also refuse to do business with anyone from the West. ... Demand for new English-language cybercrime markets continues to be high because so many existing markets get disrupted by law enforcement agencies or have administrators who run an exit scam. Before Empire, other markets that closed after their admins "exit scammed" included BitBazaar in August, Apollon in March and Nightmare in August 2019.


Open Data Institute explores diverse range of data governance structures

The involvement of different kinds of stakeholders in any particular institution also has an effect on what kinds of governance structures would be appropriate, as different incentives are needed to motivate different actors to behave as responsible and ethical stewards of the data. In the context of the private sector, for example, enterprises that would normally adopt a cut-throat, competitive mindset need to be incentivised for collaboration. Meanwhile, cash-strapped third-sector organisations, such as charities and non-governmental organisations (NGOs), need more financial backing to realise the potential benefits of data institutions. “Many [private sector] organisations are well-versed in stewarding data for their own benefit, so part of the challenge here is for existing data institutions in the private sector to steward it in ways that unlock value for other actors, whether that’s economic value from say a competition point of view, but then also from a societal point of view,” said Hardinges. “Getting organisations to consider themselves data institutions, and in ways that unlock public value from private data, is a really important part of it.”


5 supply chain cybersecurity risks and best practices

Falling prey to the "it couldn't happen to us" mentality is a big mistake. But despite clear evidence that supply chain cyber attacks are on the rise, some leaders aren't facing that reality, even if they do understand techniques to build supply chain resilience more broadly. One of the biggest supply chain challenges is leaders thinking they're not going to be hacked, said Jorge Rey, the principal in charge of information security and compliance for services at Kaufman Rossin, a CPA and advisory firm in Miami. To fully address supply chain cybersecurity, supply chain leaders must realize they need to face the risk reality. The supply chain is a veritable smorgasbord of exploit opportunities -- there are so many information and product handoffs even in a simple one -- and each handoff represents risk, especially at the easily overlooked points where digital technology is involved. ... Supply chain cyber attacks are carried out with different goals in mind -- from ransom to sabotage to theft of intellectual property, Atwood said. These cyberattacks can also take many forms, such as hijacking software updates and injecting malicious code into legitimate software, as well as targeting IT and operational technology and hitting every domain and any node, Atwood said.


Moving Toward Smarter Data: Graph Databases and Machine Learning

Data plays a significant role in machine learning, and formatting it in ways that a machine learning algorithm can train on is imperative. Data pipelines were created to address this. A data pipeline is a process through which raw data is extracted from the database (or other data sources), is transformed, and is then loaded into a form that a machine learning algorithm can train and test on. Connected features are those features that are inherent in the topology of the graph. For example, how many edges (i.e., relationships) to other nodes does a specific node have? If many nodes are close together in the graph, a community of nodes may exist there. Some nodes will be part of that community while others may not. If a specific node has many outgoing relationships, that node’s influence on other nodes could be higher, given the right domain and context. Like other features being extracted from the data and used for training and testing, connected features can be extracted by doing a custom query based on an understanding of the problem space. However, given that these patterns can be generalized across graphs, unsupervised algorithms have been created that extract key information about the topology of your graph data for use as features when training your model.
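The degree and community signals described here can be extracted with a few lines over a toy adjacency list. The graph and feature names below are hypothetical; graph databases expose the same idea through built-in graph algorithms:

```python
# Toy undirected graph as an adjacency list.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def connected_features(g):
    """Per-node topological features: degree, plus local clustering
    (fraction of neighbor pairs that are themselves connected), a
    crude signal that the node sits inside a community."""
    feats = {}
    for node, nbrs in g.items():
        pairs = [(u, v) for i, u in enumerate(nbrs) for v in nbrs[i + 1:]]
        closed = sum(1 for u, v in pairs if v in g.get(u, []))
        feats[node] = {
            "degree": len(nbrs),
            "clustering": closed / len(pairs) if pairs else 0.0,
        }
    return feats

print(connected_features(graph))
```

Node "c" has the highest degree, and the triangle a-b-c gives nodes "a" and "b" a clustering of 1.0, exactly the kind of columns that would be joined to the rest of the feature table before training.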


Dark Side of AI: How to Make Artificial Intelligence Trustworthy

Malicious inputs to AI models can come in the form of adversarial AI, manipulated digital inputs or malicious physical inputs. Adversarial AI may come in the form of socially engineering humans using an AI-generated voice, which can be used for any type of crime and is considered a “new” form of phishing. For example, in March of last year, criminals used an AI-synthesized voice to impersonate a CEO and demand a fraudulent transfer of $243,000 to their own accounts. Query attacks involve criminals sending queries to an organization's AI model to figure out how it works, and may come in black-box or white-box form. Specifically, a black-box query attack determines the uncommon, perturbed inputs to use for a desired output, such as financial gain or avoiding detection. Some academics have been able to fool leading translation models by manipulating the input, resulting in an incorrect translation. A white-box query attack regenerates a training dataset to reproduce a similar model, which might result in valuable data being stolen. One example was a voice recognition vendor that fell victim to a new, foreign vendor counterfeiting its technology and then selling it, which allowed the foreign vendor to capture market share based on stolen IP.
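A black-box query attack can be sketched with a toy model the attacker can only query, never inspect. Everything here is hypothetical and deliberately crude; real attacks use far more sophisticated perturbation search:

```python
# A toy "black-box" spam filter: the attacker sees only the verdict,
# never the rule inside.
def blackbox_filter(text: str) -> bool:
    return "transfer" in text.lower()  # hidden rule

def evade(text: str):
    """Repeatedly query the model with small perturbations until the
    verdict flips; returns the evading input and the query count."""
    queries = 0
    candidate = text
    for i in range(len(candidate)):
        if not blackbox_filter(candidate):
            break
        # Perturb one character (a stand-in for real perturbation search).
        candidate = candidate[:i] + "*" + candidate[i + 1:]
        queries += 1
    return candidate, queries

evading, n = evade("please Transfer funds")
print(evading, n)
```

The attacker never learns the rule itself, only which perturbed inputs slip past it, which is why monitoring for high-volume, near-duplicate queries is a common defensive recommendation.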


DDoS attacks rise in intensity, sophistication and volume

The total number of attacks increased by over two and a half times during January through June of 2020 compared to the same period in 2019. The increase was felt across all size categories, with the biggest growth happening at opposite ends of the scale – the number of attacks sized 100 Gbps and above grew a whopping 275% and the number of very small attacks, sized 5 Gbps and below, increased by more than 200%. Overall, small attacks sized 5 Gbps and below represented 70% of all attacks mitigated between January and June of 2020. “While large volumetric attacks capture attention and headlines, bad actors increasingly recognise the value of striking at low enough volume to bypass the traffic thresholds that would trigger mitigation to degrade performance or precision target vulnerable infrastructure like a VPN,” said Michael Kaczmarek, Neustar VP of Security Products. “These shifts put every organization with an internet presence at risk of a DDoS attack – a threat that is particularly critical with global workforces reliant on VPNs for remote login. VPN servers are often left vulnerable, making it simple for cybercriminals to take an entire workforce offline with a targeted DDoS attack.”


Group Privacy and Data Trusts: A New Frontier for Data Governance?

The concept of collective privacy shifts the focus from an individual controlling their privacy rights to a group or a community having data rights as a whole. In the age of Big Data analytics, the NPD Report does well to discuss the risks of collective privacy harms to groups of people or communities. It is essential to look beyond traditional notions of privacy centered around an individual, as Big Data analytical tools rarely focus on individuals, but on drawing insights at the group level, or on “the crowd” of technology users. In a revealing example from 2013, data processors who accessed New York City’s taxi trip data (including trip dates and times) were able to infer with a degree of accuracy whether a taxi driver was a devout Muslim, even though data on the taxi licenses and medallion numbers had been anonymised. Data processors linked pauses in taxi trips with adherence to regularly timed prayers to arrive at their conclusion. Such findings and classifications may result in heightened surveillance or discrimination against such groups or communities as a whole. ... It might be in the interest of such a community to keep details about their ailment and residence private, as even anonymised data pointing to their general whereabouts could lead to harassment and the violation of their privacy.
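The taxi inference is simple enough to sketch: find drivers whose breaks repeatedly fall in the same time window. The data and thresholds below are entirely hypothetical, but the shape of the analysis matches the one described:

```python
# Hypothetical anonymised trip log:
# (driver_id, trip_end_hour, next_trip_start_hour), in decimal hours.
trips = [
    ("driver_1", 12.9, 13.4), ("driver_1", 12.8, 13.5), ("driver_1", 13.0, 13.4),
    ("driver_2", 11.0, 11.1), ("driver_2", 15.2, 15.3), ("driver_2", 18.0, 18.1),
]

def regular_pause(records, window=(12.5, 13.5), min_days=3, min_gap=0.25):
    """Flag drivers whose substantial breaks repeatedly fall inside one
    recurring time window -- the re-identifying pattern, not any single trip."""
    hits = {}
    for driver, end, start in records:
        if window[0] <= end <= window[1] and start - end >= min_gap:
            hits[driver] = hits.get(driver, 0) + 1
    return {d for d, count in hits.items() if count >= min_days}

print(regular_pause(trips))
```

Nothing in any single row is identifying; the harm emerges only from the aggregate pattern, which is precisely why anonymising license and medallion numbers was not enough.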


Analysis: Online Attacks Hit Education Sector Worldwide

The U.S. faces a rise in distributed denial-of-service attacks, while Europe is seeing an increase in information disclosure attempts - many of them resulting from ransomware incidents, the researchers say. Meanwhile, in Asia, cybercriminals are taking advantage of vulnerabilities in the IT systems that support schools and universities to wage a variety of attacks. DDoS and other attacks are surging because threat actors see an opportunity to disrupt schools resuming online education and potentially earn a ransom for ending an attack, according to Check Point and other security researchers. "Distributed denial-of-service attacks are on the rise and a major cause of network downtime," the new Check Point report notes. "Whether executed by hacktivists to draw attention to a cause, fraudsters trying to illegally obtain data or funds or a result of geopolitical events, DDoS attacks are a destructive cyber weapon. Beyond education and research, organizations from across all sectors face such attacks daily." In the U.S., the Cybersecurity and Infrastructure Security Agency has warned of an increase in targeted DDoS attacks against financial organizations and government agencies.



Quote for the day:

"One of the most sincere forms of respect is actually listening to what another has to say." -- Bryant H. McGill

Daily Tech Digest - September 18, 2020

Windows 10 upgrades are rarely useful, say IT admins

There is a disconnect between Microsoft's efforts and expectations – months of development time and testing to produce features and functionality that customers will clamor for – and the reaction by, in electioneering terms, a landslide-sized majority of those customers. In many cases, IT admins simply shrug at what Microsoft trumpets. "I understand the concept of WaaS, and the ability to upgrade the OS without a wipe/re-install is a good concept," one of those polled said. "[But] let's concentrate more on useful features, like an upgraded File Explorer, a Start menu that always works, and context-sensitive (and useful) help, and less on, 'It's time to release a new feature update, whether it has any useful new features or not.'" Some were considerably harsher in taking feature upgrades to task. "Don't have a clue why they think some of the new features might be worth our time, or even theirs," said another of those polled. And others decried what they saw as wasted opportunities. "It's mostly bells, whistles and window-dressing," one IT admin said. "It seems like no fundamental problems are tackled. Although updates DO every now and then cause new problems in fundamental functionality. Looks like there's at least some scratching done on the fundamental surface – [but] without explanation."


Adaptive Architecture: A Bridge between Fashion and Technology

Conceptually, IT borrowed a lot of themes from Civil Engineering, one being Architecture. Despite the 3,000 years that separate the two fields, Architecture and Software Architecture share similar vocabulary across their multiple definitions, such as "structure", "components", and "environment". At first, that relationship was really strong, because the technology was "more concrete", heavier, and, obviously, slower. Everything was super difficult to change, and applications used to survive without an update for quite a long time. But as computing advanced, the world became submerged in a massive flow of information on digital platforms, and customers can now connect directly to businesses through these channels, creating conditions that demand companies be able to push reliable modifications to their websites or applications every day, or even multiple times throughout the day. This progress didn't happen overnight, and as digital evolved, the technical landscape started to change, reflecting new requirements and problems. In 2001, in an initiative to understand these obstacles to developing software, obstacles still relevant to this day, seventeen people gathered in the Wasatch mountains of Utah. From that gathering came "The Agile Manifesto", a declaration based on four key values and 12 principles, establishing a mindset called "Agile".


Deep Dive into OWIN Katana

OWIN stands for Open Web Interface for .NET. OWIN is an open standard specification which defines a standard interface between .NET web servers and web applications. The aim is to provide a standard interface which is simple, pluggable and lightweight. OWIN is motivated by the development of web frameworks in other languages, such as Node.js for JavaScript, Rack for Ruby, and WSGI for Python. All these web frameworks are designed to be fast and simple, and they enable the development of web applications in a modular way. In contrast, prior to OWIN, every .NET web application required a dependency on System.Web.dll, which was tightly coupled to Microsoft's IIS (Internet Information Services). This meant that .NET web applications came with a number of application component stacks from IIS, whether they were actually required or not. This made .NET web applications, as a whole, heavier, and they performed slower than their counterparts in other languages in many benchmarks. OWIN was initiated by members of Microsoft's communities, such as the C#, F# and dynamic programming communities. Thus, the specification is largely influenced by the programming paradigm of those communities.
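The server/application interface OWIN defines is, in spirit, what WSGI is to Python, one pluggable callable between server and app, with middleware composing by wrapping it. A minimal WSGI sketch shows the shape (names hypothetical):

```python
# A complete WSGI application: one callable, no framework required.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a minimal WSGI app\n"]

# Middleware composes by wrapping the callable -- the pluggable,
# lightweight layering OWIN brought to .NET.
def logging_middleware(inner):
    def wrapped(environ, start_response):
        print("request:", environ.get("PATH_INFO", "/"))
        return inner(environ, start_response)
    return wrapped

application = logging_middleware(app)
```

Because every layer speaks the same two-argument contract, any compliant server can host any stack of middleware, which is exactly the decoupling from a single host (IIS, in .NET's case) that the specification was after.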


Banking on digitalisation: A transformation journey enabled by technology, powered by humans

Banks are now staring at the massive challenge of continuing their digital investments in a cost-constrained environment. Getting their workforce ready to develop the technologies, while continuing to deliver value to their customers, is another issue. At the same time, they are competing with new digital banks that will undoubtedly come in with newer technology built on modern architecture without the legacy debt. However, there are industry players that may have cracked the code to successful digitalisation. I know of incumbent banks as well as digital banks developing world-class digital capabilities at lower costs, while training their people to make full use of their new digital investments. Recently, the finance function of a leading global universal bank adopted a “citizen-led” digital transformation, training 300+ “citizen” developers who identified 200+ new use cases resulting in an annual run rate cost reduction of $15 million. This case study highlights the importance of engaging and upskilling your workforce while contributing to bottom line benefits. Over the last two decades, technology by itself has evolved and now has the ability to transform whole businesses in the financial services sector, similar to its impact on other industries such as retail and media. Traditionally, for banks, technology was a support function enabling product and customer strategies.


Google details RigL algorithm for building more efficient neural networks

Google researchers put RigL to the test in an experiment involving an image processing model. It was given the task of analyzing images containing different characters. During the model training phase, RigL determined that the AI only needs to analyze the character in the foreground of each image and can skip processing the background pixels, which don’t contain any useful information. The algorithm then removed connections used for processing background pixels and added new, more efficient ones in their places.  “The algorithm identifies which neurons should be active during training, which helps the optimization process to utilize the most relevant connections and results in better sparse solutions,” Google research engineers Utku Evci and Pablo Samuel Castro explained in a blog post. “At regularly spaced intervals we remove a fraction of the connections.” There are other methods besides RigL that attempt to compress neural networks by removing redundant connections. However, those methods have the downside of significantly reducing the compressed model’s accuracy, which limits their practical application. Google says RigL achieves higher accuracy than three of the most sophisticated alternative techniques while also “consistently requiring fewer FLOPs (and memory footprint) than the other methods.”
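The prune-and-regrow cycle described here can be sketched on a flat weight vector. This is an illustration of the idea under simplified assumptions (mocked gradients, fixed init value), not Google's implementation:

```python
# Sketch of a RigL-style update: at regular intervals, drop the
# smallest-magnitude active weights and regrow the same number of
# connections where the gradient magnitude is largest.
def rigl_step(weights, grads, drop_fraction=0.3):
    active = [i for i, w in enumerate(weights) if w != 0.0]
    k = int(len(active) * drop_fraction)
    # 1. Prune: remove the k weakest active connections.
    for i in sorted(active, key=lambda i: abs(weights[i]))[:k]:
        weights[i] = 0.0
    # 2. Regrow: activate the k inactive positions with largest gradient.
    inactive = [i for i, w in enumerate(weights) if w == 0.0]
    for i in sorted(inactive, key=lambda i: -abs(grads[i]))[:k]:
        weights[i] = 0.01  # new connections start near zero
    return weights

w = [0.9, 0.0, 0.05, -0.7, 0.0, 0.02]
g = [0.1, 0.8, 0.1, 0.2, 0.05, 0.3]
print(rigl_step(w, g))
```

Note that the number of active connections is unchanged after the step: the sparsity budget is fixed, and only which connections spend it changes, which is how the method keeps compute and memory low throughout training.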


IBM, AI And The Battle For Cybersecurity

While older adversarial attack patterns were algorithmic and easier to detect, new attacks add AI features such as natural language processing and a more natural human-computer interaction to make malware more evasive, pervasive and scalable. The malware will use AI to keep changing form in order to be more evasive and fool common detection techniques and rules. Automated techniques can make the malware more scalable and, combined with AI, can move laterally through an enterprise and attack targets without human intervention. The use of AI in cybersecurity attacks will likely become more pervasive. Better spam can be crafted that avoids detection, or is personalized to a specific target as a form of spear-phishing attack, by using natural language processing to craft more human-like messages. In addition, malware can be smart enough to understand when it is in a honeypot or sandbox and will avoid malicious execution to look more benign and not tip off security defenses. Adversarial AI attacks the human element with the use of AI-augmented chatbots to disguise the attack with human-like emulation. This can escalate to the point where AI-powered voice synthesis can fool people into believing that they’re dealing with a real human within their organization.


'We built two data centers in the middle of the pandemic'

With a substantial proportion of chips and components coming from the Wuhan region in China, supply chains were already facing delays. After negotiation with suppliers, Harvey's team managed to procure the right equipment on time, air-freighting components to the island from the UK mainland instead of using ferry services as usual. As the state of Guernsey started restricting travel, a local Agilisys team was then designated to pick up the data centers' build. The team's head of IT services Shona Leavey remembers juggling the requirements for the build, while also setting up civil servants with laptops to make sure the state could continue to deliver public services, even remotely. "We were rolling out Teams to civil servants, and at the same time had some of the team working on the actual data center build," Leavey tells ZDNet. "Any concept of a typical nine-to-five went out the window." Given the timeline for the build, it became evident that some engineers would have to go into the data centers to set up the equipment during the early months of summer. That meant the Agilisys team started a long, thorough, health and safety assessment.


Deepfake Detection Poses Problematic Technology Race

The problem is well known among researchers. Take Microsoft's Sept. 1 announcement of a tool designed to help detect deepfake videos. The Microsoft Video Authenticator detects possible deepfakes by finding the boundary between inserted images and the original video, providing a score for the video as it plays. While the technology is being released as a way to detect issues during the election cycle, Microsoft warned that disinformation groups will quickly adapt. "The fact that [the images are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology," said Tom Burt, corporate vice president of customer security and trust, and Eric Horvitz, chief scientific officer, in a blog post describing the technology. "However, in the short run, such as the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes." Microsoft is not alone in considering current deepfake detection technology a temporary fix. In its Deepfake Detection Challenge (DFDC) in early summer, Facebook found the winning algorithm accurately detected fake videos only about two-thirds of the time.


Deliver Faster by Killing the Test Column

Instead of testers simply picking work out of this column and working on it till it’s done, they should work with the team to help them understand how they approach testing, the types of things they are looking for and also finding during testing. Doing this with a handful of tasks is likely to help them identify some key themes within their work. For example, are there similar root causes such as usability or accessibility issues, or some hardware/software combination that always results in a bug? Is there something the devs could look out for while making the changes? These themes can be used to create a backlog of tasks that the team can begin to tackle to see if they can be addressed earlier on in the development life cycle. By focusing on the process and not the people, it makes it easier to talk about what testers are doing, how developers and testers could mitigate this work earlier on in the life cycle, and begins to be the seeds of the continuous improvement programme. Leadership in this process is very important. Leaders need to help testers feel comfortable that they are not being targeted as the "problem" within the team, but are actually the solution in educating the team in what risks they are looking for when testing.


Mitigating Cyber-Risk While We're (Still) Working from Home

At home, most folks use a router provided by their Internet service provider. The home router has a firewall and NAT functionality so your family can safely connect out to your favorite websites, and those websites can send the data you asked for back to you. However, with most employees now working at home, enterprise-grade firewalls at the edge of corporate networks are no longer protecting them or providing the needed visibility for IT to help keep the corporate users safe. That's where having an endpoint security solution that can provide visibility, segment and limit access between different internal networks and laptop devices can come in handy. With CISOs, government employees, and business executives sharing home networks with their 15-year-old gamers and TikTok addicts, it's imperative to extend the principles of least privilege to the systems with important data inside the home network. Meaning that even if a bad actor gains access to your kid's network, your laptop and organization's internal assets stay in the clear. When it comes to proactively protecting against cyber threats, segmentation is one of the best ways to ensure that bad actors stay contained when they breach the perimeter. Because, let's be honest, it's bound to happen.



Quote for the day:

"Challenges are what make life interesting and overcoming them is what makes life meaningful." --Joshua Marine

Daily Tech Digest - September 17, 2020

Outbound Email Errors Cause 93% Increase in Breaches

Egress CEO Tony Pepper said the problem is only going to get worse with increased remote working and higher email volumes, which create prime conditions for outbound email data breaches of a type that traditional DLP tools simply cannot handle. “Instead, organizations need intelligent technologies, like machine learning, to create a contextual understanding of individual users that spots errors such as wrong recipients, incorrect file attachments or responses to phishing emails, and alerts the user before they make a mistake,” he said. The most common breach types were replying to spear-phishing emails (80%), emails sent to the wrong recipients (80%) and sending the incorrect file attachment (80%). Speaking to Infosecurity, Egress VP of corporate marketing Dan Hoy said businesses have reported an increase in outbound emails since lockdown, “and more emails mean more risk.” He called it a numbers game with rising stakes: the further remote workers are removed from security and IT teams, the more susceptible they become and the more likely they are to make mistakes. According to the research, 76% of breaches were caused by “intentional exfiltration.” Hoy confirmed this is partly employees innocently trying to do their job, without meaning harm, by sending files to webmail accounts, but this does increase risk, “and you cannot ignore the malicious intent.”
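A crude version of the "wrong recipient" check can be sketched as a sender-history heuristic. This is a toy under obvious assumptions (addresses and history invented), not Egress's product, which builds a far richer contextual model:

```python
# Toy contextual check: warn on any outbound recipient who has never
# appeared in this sender's previously sent mail.
history = [
    {"alice@corp.com", "bob@corp.com"},
    {"alice@corp.com", "bob@corp.com", "carol@corp.com"},
]

def flag_unusual(recipients, past):
    """Return recipients absent from all past messages -- a typo'd
    domain or an autocomplete slip would land here."""
    known = set().union(*past) if past else set()
    return sorted(r for r in recipients if r not in known)

# "bob@corq.com" is a plausible slip for "bob@corp.com".
print(flag_unusual({"alice@corp.com", "bob@corq.com"}, history))
```

The key design point matches the quote above: the check alerts the user before send rather than blocking outright, since a never-seen recipient is often legitimate.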


‘The demand for cloud computing & cybersecurity professionals is on the rise’

The COVID-19 pandemic undoubtedly has disrupted the normalcy of every company across every sector. At Clumio, our primary focus continues to be the health and well-being of our people. While tackling the situation, we also need to keep pace with our professional duties. We made the transition to remote work immediately and are in constant touch with our employees to ensure they don’t feel isolated and remain focused on their work. We are encouraging employees to follow the best practices of remote work and motivating them to spend time on their emotional, mental and physical wellbeing during this time. We conduct Zoom happy hours frequently to stay connected and have fun. As part of these sessions, we also recently celebrated a virtual baby shower for one of our colleagues. We had our annual summer picnic and created wonderful memories while maintaining social distance, but staying together. During this time, we have also launched the India Research and Development center in Bangalore. Our India Center will drive front-end innovation and research to build cloud solutions. India has a huge talent pool in technology, and it is only growing. We have also started virtual hiring and onboarding during the pandemic.


AI investment to increase but challenges remain around delivering ROI

ROI on AI is still a work in progress that requires a focus on strategic change. As companies progress in AI use, they often shift their focus from automating internal employee and customer processes to delivering on strategic goals. For example, 31% of AI leaders report increased revenue, 22% greater market share, 22% new products and services, 21% faster time-to-market, 21% global expansion, 19% creation of new business models, and 14% higher shareholder value. In fact, the AI-enabled functions showing the highest returns are all fundamental to rethinking business strategies for a digital-first world: strategic planning, supply chain management, product development, and distribution and logistics. The study found that automakers are at the forefront of AI excellence, as they accelerate AI adoption to deliver on every part of their business strategy, from upgrading production processes and improving safety features to developing self-driving cars. Of the 12 industries benchmarked in the study, automotive employs the largest AI teams. With the government actively supporting AI under its Society 5.0 program, Japanese companies lead the pack in AI adoption. 


The future of .NET Standard

.NET 5 and all future versions will always support .NET Standard 2.1 and earlier. The only reason to retarget from .NET Standard to .NET 5 is to gain access to more runtime features, language features, or APIs. So, you can think of .NET 5 as .NET Standard vNext. What about new code? Should you still start with .NET Standard 2.0 or should you go straight to .NET 5? It depends. App components: If you’re using libraries to break down your application into several components, my recommendation is to use netX.Y where X.Y is the lowest number of .NET that your application (or applications) are targeting. For simplicity, you probably want all projects that make up your application to be on the same version of .NET because it means you can assume the same BCL features everywhere. Reusable libraries: If you’re building reusable libraries that you plan on shipping on NuGet, you’ll want to consider the trade-off between reach and available feature set. .NET Standard 2.0 is the highest version of .NET Standard that is supported by .NET Framework, so it will give you the most reach, while also giving you a fairly large feature set to work with. We’d generally recommend against targeting .NET Standard 1.x as it’s not worth the hassle anymore. 
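In project-file terms, the guidance above comes down to a single TargetFramework line. This fragment assumes an SDK-style project for a .NET 5 application component; a reusable NuGet library would swap in the netstandard2.0 value shown in the comment:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- App component: match the lowest .NET version your app targets. -->
    <TargetFramework>net5.0</TargetFramework>
    <!-- Reusable library for maximum reach (incl. .NET Framework):
         <TargetFramework>netstandard2.0</TargetFramework> -->
  </PropertyGroup>
</Project>
```

Keeping all of an application's own projects on the same netX.Y value is what makes the "same BCL features everywhere" assumption from the paragraph above hold.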


Fintech sector faces "existential crisis" says McKinsey

After growing more than 25% a year since 2014, investment into the sector dropped by 11% globally and 30% in Europe in the first half of 2020, says McKinsey, citing figures from Dealroom. In July 2020, after months of Covid-19-related lockdowns in most European countries, the drop was even steeper: 18% globally and 44% in Europe, versus the previous year. "This constitutes a significant challenge for fintechs, many of which are still not profitable and have a continuous need for capital as they complete their innovation cycle: attracting new customers, refining propositions and ultimately monetizing their scale to turn a profit," states the McKinsey paper. "The Covid-19 crisis has in effect shortened the runway for many fintechs, posing an existential threat to the sector." Analyzing fundraising data for the last three years from Dealroom, the consultancy found that as much as €5.7 billion will be needed to sustain the EU fintech sector through the second half of 2021 — a point at which some sort of economic normalcy might begin to emerge. It is not clear where these funds will come from, however. Fintechs are largely unable to access loan bailout schemes due to their pre-profit status.


Artificial Intuition: A New Generation of AI

Artificial intuition is an easy term to misread because it sounds like artificial emotion and artificial empathy. In fact, it differs fundamentally. Researchers are working on artificial emotion so machines can mimic human behavior more accurately, while artificial empathy aims to identify a person's state of mind in real time. Along these lines, for instance, chatbots, virtual assistants and care robots can respond to people more appropriately in context. Artificial intuition is closer to human instinct, since it can rapidly assess a whole situation, including very subtle markers of specific activity. The fourth generation of AI is artificial intuition, which enables computers to discover threats and opportunities without being told what to search for, just as human instinct permits us to make decisions without being told explicitly how to do so. It is like a seasoned detective who can enter a crime scene and know immediately that something doesn't seem right, or an experienced investor who can spot a coming trend before anyone else.


Attacked by ransomware? Five steps to recovery

Arguably the most challenging step in recovering from a ransomware attack is the initial awareness that something is wrong. It’s also one of the most crucial. The sooner you can detect the ransomware attack, the less data may be affected, which directly impacts how much time it will take to recover your environment. Ransomware is designed to be very hard to detect; by the time you see the ransom note, it may already have inflicted damage across the entire environment. Having a cybersecurity solution that can identify unusual behavior, such as abnormal file sharing, can help quickly isolate a ransomware infection and stop it before it spreads further. Abnormal file behavior detection is one of the most effective means of detecting a ransomware attack, and it produces the fewest false positives compared with signature-based or network-traffic-based detection. An additional method of detecting a ransomware attack is to use a “signature-based” approach. The issue with this method is that it requires the ransomware to be known: if the code is available, software can be trained to look for it. This is not recommended, however, because sophisticated attacks use new, previously unknown forms of ransomware.
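As a toy sketch of the abnormal-file-behavior idea described above (the window and threshold values are made up, and real detection products do far more), flagging a sudden burst of file modifications might look like:

```python
from collections import deque

class BurstDetector:
    """Flag an abnormal rate of file-change events within a sliding window.

    A crude proxy for 'abnormal file behavior': ransomware typically
    rewrites many files in a very short period of time.
    """

    def __init__(self, window_seconds=10, threshold=100):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()  # timestamps of recent file-change events

    def record(self, timestamp):
        """Record one file-change event; return True if the rate looks abnormal."""
        self.events.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold
```

In practice a detector like this would feed an alerting pipeline rather than return a boolean, but it illustrates why rate-based behavioral detection needs no prior knowledge of the ransomware's code, unlike the signature-based approach.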


Struggling to Secure Remote IT? 3 Lessons from the Office

To prepare for the arrival of CCPA, business leaders told us they spent an average of $81.9 million on compliance during the last 12 months. Yet despite making investments in hiring (93%), workforce training (89%), and purchasing new software or services to ensure compliance (95%), 40% still felt unprepared for the evolving regulatory landscape. Why? Because the root causes were not addressed. Perhaps their IT operations and security teams worked in silos, creating complexity and narrowing their visibility into their IT estates. Maybe their teams were completely unaware that other departments introduced their own software into the environment. Or more commonly, the organization used legacy tooling that wasn't plugged into the endpoint management or security systems of the IT teams. These are just some of the root causes that keep organizations in the dark and prone to exploits. While the transition to remote work was swift, it has presented businesses with an opportunity to face these issues head-on. As workforces continue to work remotely, CISOs and CIOs now have the chance to evaluate how they effectively manage risk in the long term, which includes running continuous risk assessments and investing in solutions that deliver rapid incident response and improved decision-making.


CTO challenges around the return to the workplace

Every CTO tells us that the digital transformation and change management programmes designed to address the relentless regulatory, competitor, innovation and customer challenges must go ahead as planned, regardless of the pandemic. You may be tackling automating end-to-end electronic trading workflows or creating mobile framework applications. Whatever the focus, firms stumble over the limitations of legacy systems that hamper the journey towards electronification: trading desks still depend on quotes, orders and trades processed from a multitude of external trading platforms, and inconsistency, lag and gaps all result in costly errors, which are missed opportunities at best, and regulatory reporting breaches and huge fines at worst. In the quest for efficiencies, mitigation of risk, and a seamless and future-proofed IT architecture, firms must automate to meet their regulatory obligations and deliver client, management and regulatory transparency. And this hasn’t even touched on achieving the ambition to create end-to-end, freely flowing models of perfectly clean, ordered and well-governed data. Every CTO needs to apply extraction and visualisation layers, and mine the data for valuable insights that can be fed further upstream.


The Case for Explainable AI (XAI)

Despite the numerous benefits of developing XAI, many formidable challenges persist. A significant hurdle, particularly for those attempting to establish standards and regulations, is the fact that different users will require different levels of explainability in different contexts. Models that are deployed to effectuate decisions that directly impact human life, such as those in hospitals or military environments, will produce different needs and constraints than ones utilized in low-risk situations. There are also nuances within the performance-explainability trade-off. Infrastructure and systems designers are constantly balancing the demands of competing interests. ... There are also a number of risks associated with explainable AI. Systems that produce seemingly credible but actually incorrect results would be difficult for most consumers to detect. Trust in AI systems can enable deception by way of those very AI systems, especially when stakeholders provide features that purport to offer explainability but actually do not. Engineers also worry that explainability could give rise to vaster opportunities for exploitation by malicious actors. Simply put, if it is easier to understand how a model converts input into output, it is likely also easier to craft adversarial inputs that are designed to achieve specific outputs.
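The adversarial-input worry above can be made concrete with a toy example. When a model's internals are fully transparent (here, a made-up linear scorer with known weights), crafting an input that drives the output where an attacker wants is straightforward, a sign-step in the spirit of gradient-based attacks:

```python
# Hypothetical transparent model: a linear scorer with known weights.
weights = [2.0, -1.0, 0.5]

def score(x):
    """The model's output: a weighted sum of the input features."""
    return sum(w * xi for w, xi in zip(weights, x))

def adversarial(x, budget=1.0):
    """Spend `budget` per feature, pushing each feature in the direction
    that lowers the score most (possible only because the weights are known)."""
    return [xi - budget * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

x = [1.0, 1.0, 1.0]          # score(x) is 1.5
adv = adversarial(x)          # score(adv) drops to -2.0
```

Against an opaque model an attacker would have to probe for this information; full explainability hands it over for free, which is exactly the trade-off the passage describes.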



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - September 16, 2020

As The Cloud Transforms Enterprises, Cybersecurity Growth Soars

Moving beyond password vaults and implementing cloud-based Identity Access Management (IAM) and Privileged Access Management (PAM) solutions are proving to be adaptive enough to keep up with cloud-based transformation projects being fast-tracked to completion before the end of the year. Leaders in cloud-based IAM and PAM include Centrify, who helps customers consolidate identities, deliver cross-platform, least privilege access and control shared accounts while securing remote access and auditing all privileged sessions. Enterprise spending on cybersecurity continues to grow despite the pandemic, driven by the compelling cost and time-to-market advantages cloud-based applications provide. Gartner predicts end-user spending for the information security and risk management market will grow at a compound annual growth rate of 8.2% between 2019 and 2024, becoming a $207.7B market in four years. CIOs and their teams say the pace and scale of cloud transformation in the latter half of 2020 make cloud-based IAM and PAM a must-have to keep their cloud transformation initiatives on track and secure.
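As a quick back-of-envelope check on the Gartner figures quoted above (assuming the 8.2% compounds over the five annual steps from 2019 to 2024), the implied 2019 base is roughly $140B:

```python
def implied_base(final_value, cagr, years):
    """Reverse a compound annual growth rate to recover the starting value."""
    return final_value / (1 + cagr) ** years

# $207.7B in 2024 at an 8.2% CAGR from 2019 implies roughly $140B in 2019.
base_2019 = implied_base(207.7, 0.082, 5)
```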


Azure + Spring Boot = Serverless - Q&A with Julien Dubois

With “lift and shift”, one of the troubles you will have is that you will need to port all your Spring MVC controllers to become Spring Cloud functions. There are different ways and tricks to achieve this, and you will have some limitations, but that should work without spending too much money. Also, you wouldn’t touch your business code too much, so nothing important should break. Then, an Azure Function works in a very different way from a classical Spring Boot application: you can still benefit from a Hibernate second-level cache or from a database connection pool, but you can easily understand that they will not be as efficient, as their time to live will be much lower. Using a distributed cache will be a major problem here. Also, your function can scale up a lot better than your one-node Tomcat server, so maybe your database will not work as well as before, as it wasn’t built for that sort of load. You could instead use a database like CosmosDB, or a caching solution like Redis, which are two very widely used options on Azure. This is a lot more work, but that’s the only way to get all the benefits from a serverless platform.


The power of feelings at work

Articulating feelings can also bring new understanding to apparent insubordination or poor performance. For example, our firm was once engaged to help an oil refinery stymied by a total lack of response from its employees when it tried to elicit ideas for a cost-reduction effort. To understand what was behind the disengagement, we held focus groups across functions and grades and conducted one-on-one interviews with senior executives. We sought out individuals whom others identified via a questionnaire (e.g., to whom do you go within the organization if you have a problem?) as particularly in touch with employee feelings and asked for their input on what was stopping action. Across the sessions, an articulation of feelings — frustration, fear, and hopelessness — emerged. ... Because feelings are always messengers of needs, the next step is to follow the feeling to the need. Needs are actionable and often multidimensional. Taking action from an understanding of needs sidesteps recrimination and invites compassion. Psychologist Marshall Rosenberg, founder of the Center for Nonviolent Communication, which advocates speaking from needs to navigate conflict, writes, “From the moment people begin talking about what they need, rather than what’s wrong with one another, the possibility of finding ways to meet everybody’s needs is greatly increased.”


Cybersecurity Bounces Back, but Talent Still Absent

Leave it to a global pandemic to disrupt industries many of us have assumed to be stalwart. Companies fortunate enough not to traffic in hard goods are realizing they can survive (and cut significant costs) by moving to work-from-home workforces. This shift, with an estimated 62% of the workforce now working from home, demonstrates the increased need in hiring for cybersecurity personnel required to manage these new business models. At first, this sounds great for the resilience of the cybersecurity sector — but this means the already existent skills shortage for security professionals is about to get a lot worse. The result is that the lines between what have been considered "pure" cybersecurity roles and, well, everything else are becoming blurred. A recent (ISC)² survey shows that many security professionals are being leveraged to support general IT requirements to accommodate different needs for work at home amid the pandemic. That makes sense. Companies need to have the infrastructure in place to support these new remote workers logging in from their home ISPs while also ensuring the security of sensitive data and intellectual property.


Secondary DNS - Deep Dive

Secondary DNS has many use cases across the Internet; however, traditionally, it was used as a synchronized backup for when the primary DNS server was unable to respond to queries. A more modern approach involves focusing on redundancy across many different nameservers, which in many cases broadcast the same anycasted IP address. Secondary DNS involves the unidirectional transfer of DNS zones from the primary to the Secondary DNS server(s). One primary can have any number of Secondary DNS servers that it must communicate with in order to keep track of any zone updates. A zone update is considered a change in the contents of a zone, which ultimately leads to a Start of Authority (SOA) serial number increase. The zone’s SOA serial is one of the key elements of Secondary DNS; it is how primary and secondary servers synchronize zones. Below is an example of what an SOA record might look like during a dig query. ... In addition to serial tracking, it is important to ensure that a standard protocol is used between primary and Secondary DNS server(s), to efficiently transfer the zone. DNS zone transfer protocols do not attempt to solve the confidentiality, authentication and integrity triad (CIA); however, the use of TSIG on top of the basic zone transfer protocols can provide integrity and authentication.
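A minimal sketch of the serial-driven synchronization described above, assuming RFC 1982 serial number arithmetic so the comparison survives 32-bit wrap-around (the serial values used are hypothetical):

```python
SERIAL_SPACE = 2**32  # SOA serials are 32-bit and wrap around (RFC 1982)

def primary_is_newer(primary_serial: int, secondary_serial: int) -> bool:
    """True if the primary's SOA serial is 'greater' in serial arithmetic,
    meaning the secondary should request a zone transfer."""
    if primary_serial == secondary_serial:
        return False
    # Modular comparison: differences under half the space count as 'ahead',
    # so a serial that has wrapped past zero is still seen as newer.
    return (primary_serial - secondary_serial) % SERIAL_SPACE < SERIAL_SPACE // 2
```

This is the check a secondary effectively performs after polling (or being notified of) the primary's SOA record; equal serials mean the zones are already in sync.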


Leadership lessons from Accenture’s CIO

The only certain thing about technology is that what we use today is not what we will use tomorrow. In my senior team, I look for a mindset of flexibility, which not everyone has. In my interviews I assess the candidate’s willingness to rotate to the new and untried, while at the same time protecting our core systems. The second competency stems from the fact that we live in a world of publicity and social media, where people are always watching what you are doing. When you live in this world, and represent your company, you need an identifiable personal brand. I’m not talking about being a big shiny icon and writing a lot of white papers. It’s about having the confidence in who you are and communicating, in a few simple words, your unique value. My responsibility is to help people recognize their personal brand and to use it as the foundation of their confidence to lead. We put out a capability called ALICE (Accenture Legal Intelligent Contract Exploration), which uses applied intelligence and machine learning for natural language processing across all of our contracts.


TOGAF and the history of Enterprise Architecture

TOGAF assumes it will be used by companies with many departments and that are hierarchical in terms of structure and decision-making. Thus, the framework maps directly onto these types of organizational structures. For example, a whole chapter is devoted to describing how to establish and operate an Architectural Board. An Architectural Board is a central planning committee that oversees the design and implementation of the Enterprise Architecture. There are chapters about Risk Management, Architecture Partitioning, and Architectural Contracts. Remember, as mentioned earlier, TOGAF has 52 chapters. It's exhaustive. However, while TOGAF is highly structured, it can accommodate other methodologies, such as Agile and DevOps. TOGAF works best for highly structured organizations, but it also understands that no two companies are alike. TOGAF supports versatility. TOGAF accepts the iterative model, which is a critical component of Agile. In fact, there is a chapter in TOGAF titled "Applying Iteration to the ADM" that discusses the nature and implementation of iteration in the Architecture Development Method (ADM). Also, DevOps can be accommodated by TOGAF.


Renault’s blockchain project aims at vehicle compliance

Deployed in collaboration with IBM, XCEED is based on ‘Hyperledger Fabric’ blockchain technology. It is designed to track and certify the compliance of components and sub-components. The tool looks to enable greater responsiveness and efficiency at a time of what Renault believes is ‘ever-greater regulatory stringency.’ Here, the carmaker is referring to the European Parliament’s new market surveillance regulations which came into effect at the start of September. These rules mean enhanced controls for vehicles already on the market. Therefore, the production chain must adjust its structure to respond to regulations within shorter timeframes. With XCEED, blockchain is used to create a trusted network for sharing compliance information between manufacturers. Each party maintains control and confidentiality over its data, without compromising its integrity, while also increasing security and confidentiality. Testing was conducted at Renault’s Douai plant, with over one million documents archived and a speed of 500 transactions per second.


Blockchain in Mortgages – Adopting the New Kid on the Block

The world is progressing fast in the technology domain, and digital services have penetrated even the unexplored corners of life. Digital transformation has become the backbone of companies and industries throughout the world. Traditional banking and lending, which were thriving a couple of years ago, are declining quickly. Since the pandemic, banks and other financial institutions have increased the push on digital channels for acquisition and customer servicing. Agile and scalable lending platforms are cutting acquisition costs and driving growth in loan volumes. Eleven of India's largest banks, namely ICICI Bank Ltd, Axis Bank Ltd, Kotak Mahindra Bank, HDFC Bank, Yes Bank, Standard Chartered Bank, RBL Bank, South Indian Bank, etc., have come together to form a consortium to leverage blockchain technology. Blockchain solutions will help increase operational efficiency, shorten turnaround times (TAT), enhance customer delight, and improve the transparency and authenticity of transactions. Banks will be able to make better credit decisions thanks to the globally shared and distributed network and increased communication between banks. Blockchain will reduce fraud and data manipulation and can deliver better portfolios.


How Microsoft is looking to MetaOS to make Microsoft 365 a 'whole life' experience

Microsoft's highest level MetaOS pitch is that it is focused on people and not tied to specific devices. Microsoft seems to be modeling itself a bit after Tencent's WeChat mobile social/payment app/service here, my sources say. Microsoft wants to create a single mobile platform that provides a consistent set of work and play services, including messaging, voice and video, digital payments, gaming, and customized document and news feeds. The MetaOS consists of a number of layers, or tiers, according to information shared with me by my contacts. At the lowest level, there's a data tier, which is implemented in the Office substrate and/or Microsoft Graph. This data tier is all about network identity and groups. There's also an application model, which includes work Microsoft is doing around Fluid Framework (its fast co-authoring and object embedding technology); Power Apps for rapid development and even the Visual Studio team for dev tools. Microsoft is creating a set of services and contracts for developers around Fluid Core, Search, personalization/recommendation, security, and management. For software vendors and customers,



Quote for the day:

"As a leader, you set the tone for your entire team. If you have a positive attitude, your team will achieve much more." -- Colin Powell

Daily Tech Digest - September 15, 2020

Edge computing’s epic turf war

Edge computing and the IoT — not to mention COVID-19-related production and supply chain disruptions — are already blurring the lines separating the two cultures. IoT devices bring a new level of monitoring and in some cases control over OT systems. Plus, the edge deployments to which those IoT devices connect promise a whole new arsenal of analytics capabilities to crunch the massive amounts of data being produced by OT equipment. Yet many OT organizations see edge computing as duplicative and even potentially harmful. “Selling that value proposition [to OT] is very challenging,” says Jonathan Lang, IDC research manager for its Worldwide IT/OT Convergence Strategies research practice. “They already have legacy wired connections and industrial networking capabilities and SCADA [supervisory control and data acquisition] systems and control systems that serve their needs just fine.” OT leaders also believe that integrating new systems could threaten throughput and reliability, he says. Production requirements are changing rapidly, and equipment may need to be changed very quickly. “When IT starts meddling with their equipment, that translates into a loss in productivity,” Lang says.


Use cases for AI while remote working

“Working from home presents a host of challenges; lower quality equipment, shared working spaces and limited available bandwidth to name a few. Many of these challenges mean that the quality of our audio and video communication is less than ideal. AI is regularly applied to improve the quality of our video in real time and automatically filter out distracting background noises. “Remote working can increase the feeling of being isolated or disconnected from workmates, lead to disengagement and even stress and ill health. Through sentiment and interaction analysis of everyday tools like email, messenger and other collaboration applications, AI systems can provide effective means of measuring employee wellbeing and engagement, highlighting where employees need help. Once problems are identified, AI systems can suggest support resources and activities that can help to make things better.” Lastly, an emerging AI-related method of maintaining operations during remote working has been the use of digital workers. “Digital workers are becoming essential for businesses, optimising something we’ve never been able to before: the bandwidth of employees,” said Ivan Yamshchikov, AI evangelist at ABBYY.


Eliminating Unconscious Bias in Tech Recruitment

First, it is important to realize that where you hire from is as important as how you hire. If you only ever recruit from the same schools or with the same websites, then you are leaving yourself at the mercy of the diversity of those institutions. If you only ever use one route to apply, whether that's recruitment consultants alone or a website that is only really accessible through a computer, then again you are limiting yourself to candidates with access to those pathways. Outside of reconsidering talent pools, one way to reduce unconscious bias is through the removal of identifying characteristics—no photos, no names, no dates of birth or school and university grades. This can help to an extent, in that decision-makers have to go off the candidates' experience and how they've presented their previous roles, but it does have its restrictions—with entry-level positions, for example, where relevant experience might be limited. That's a quick-fix solution. For a more sustainable, thorough and long-term approach, recruiters—particularly those hiring for technology positions—need to look at how they can accurately judge skill sets. This can be challenging in itself—in larger organizations, those involved in early stage selection may not have the requisite background understanding to make the right choices.


Ad Fraud: The Multibillion-Dollar Cybercrime CISOs Might Overlook

The practice became significantly more widespread when the scammers began leveraging networked bots to create fake clicks on sites they own or ads they've paid for, and now also encompasses hidden ads, targeting ad networks which measure views not clicks; click hijacking, when the fraudster redirects a click from one ad to another; and fake apps, which look like and are labeled as legitimate apps. These techniques are often used simultaneously to victimize companies, making the fight against ad fraud even more complex, says Luke Taylor, the chief operating officer of adtech security company TrafficGuard, which coauthored the report with Juniper. So Taylor believes that at the very least, CISOs should use lessons from the cybersecurity world to encourage their employers to become more engaged with the ad fraud challenge. A lot of ad fraud is based on making fake traffic look real, and the way that fraudsters do that is by stealing traffic logs to mimic them and create authentic-looking but fake traffic. CISOs, Taylor says, should be protecting their logs from cybercriminals the way they protect financial data.


Tech Conference Diversity: Time to Get Real

Some tech conferences provide non-traditional amenities such as sessions geared towards women (72%), a mothers' room (56%), a conference hosted meetup (28%), on-site daycare (17%) or a childcare stipend (11%). According to Classon, childcare stipends tend to be offered to people to encourage attendance, although they should also be offered to speakers who have not been offered a speaker stipend. "Part of the industry's problem is that organizers look at providing amenities, like a designated mothers' room or childcare stipend, as an extra bonus of their event when these should have been looked at as table stakes to level the playing field for more women," said Classon. "The same argument can be made for religious observances and the need for designated spaces for worship at weeklong or multi-day conferences." Tech conferences also tend to suffer from design bias as evidenced by the use of stools and chairs on stage that can make wearing a dress or skirt uncomfortable for the speaker and the audience. "Replacing bar stools with chairs that are lower to the ground makes it more comfortable for everybody, frankly," said Classon. "Organizers should also consider swapping out the common clip-on microphone that is difficult to attach to women's clothing for a headset that can rest behind the speaker's ear."


Portland becomes first city to ban companies from using facial recognition software in public

"This is the first of its kind legislation in the nation, and I believe in the world. This is truly a historic day for the city of Portland." Debate over facial recognition software continues to rage after a summer of high-profile news related to the technology. Amazon, IBM, and other major tech companies decided in June to put a moratorium on all police department use of facial recognition software after years of studies showing almost all of the available tools have high error rates and specifically cannot identify the faces of people with darker skin. Later that same month, the ACLU revealed that it was representing a man from Detroit who was arrested in front of his wife and kids based on a mistake by the Detroit Police Department's facial recognition software. Just a day later, US Senators Ed Markey and Jeff Merkley announced the Facial Recognition and Biometric Technology Moratorium Act, which they said resulted from a growing body of research that "points to systematic inaccuracy and bias issues in biometric technologies which pose disproportionate risks to non-white individuals." As with the other recent developments related to facial recognition, experts both praised and criticized the Portland ban.


Wi-Fi 6 explained: Speed, range, latency, frequency and security

Wi-Fi 6 technology enables the fastest wireless networks to date, with theoretical maximum speeds of around 10 Gbps versus Wi-Fi 5 Wave 2's peak data rates of around 7 Gbps. Experts caution, however, that max wireless speeds are typically unattainable except in perfect, laboratory-like conditions. That's why Wi-Fi 5 -- advertised as "gigabit wireless" -- failed to smash the gigabit barrier in actual practice, according to Zeus Kerravala, founder and principal analyst of ZK Research. "If you were sitting at your desk by yourself and no one else was attached to the network, you might have gotten a gigabit of connectivity from Wave 2," Kerravala said. "But I haven't talked to any company that did." He expects Wi-Fi 6 will be the first standard to consistently exceed 1 Gbps in real-world implementations. But more important than its raw increase in throughput, experts agreed, is Wi-Fi 6 technology's efficiency gains, which result in higher capacity and lower latency overall. "Wi-Fi 6 is not as much about getting better device performance," independent analyst John Fruehe said. "It really does a better job of managing larger numbers of clients at the router level."


Open Source Security's Top Threat and What To Do About It

It was easier for organizations to understand and control their use of open source software 10 or 20 years ago, when a smaller pool of commercial open source vendors licensed their software to customers, understood everything about the code, and handled security patching. Now, however, developers draw from a massive array of smaller projects they find on GitHub or share with each other. That, after all, is the beautiful thing about open source — developers no longer have to struggle with bad tools or reinvent software wheels when they can easily benefit from the community's freely available contributions to tackle just about any development need. When they do so, however, they seldom examine what's under the hood — the source code and its dependencies. Can they really trust the code? Does the party who created it stand ready to pinpoint and disclose any security flaws? Is there even someone to contact? A single application can have 10 runtimes and 100 other packages. How can you be confident all are up to date from a security perspective? This fragmentation is the No. 1 open source security threat for enterprises, and it may help explain why Common Vulnerabilities
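The dependency questions above are ultimately an inventory problem. As a deliberately minimal sketch (the package names and advisory ID are made up; real tools query live vulnerability databases and walk transitive dependencies), auditing a dependency list against known advisories might look like:

```python
# Hypothetical advisory data keyed by (package, version).
KNOWN_ADVISORIES = {
    ("example-lib", "1.0.0"): "EXAMPLE-ADVISORY-001 (hypothetical)",
}

def audit(dependencies):
    """Return the subset of (name, version) pairs that have a known advisory.

    `dependencies` is an iterable of (name, version) tuples, i.e. the
    flattened inventory of everything an application actually ships.
    """
    return {dep: KNOWN_ADVISORIES[dep] for dep in dependencies if dep in KNOWN_ADVISORIES}
```

The hard part in practice is not this lookup but producing the inventory at all: with 10 runtimes and 100 packages per application, most organizations cannot enumerate `dependencies` accurately, which is the fragmentation problem the article describes.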


Why problem solving using analytics needs new thinking

There are parallels for this evolution. There was a time when building a website meant learning to write extensive lines of code. This eventually evolved to a partial self-service model via open-source software, and now the prevalence of simple drag-and-drop features allow anyone with an idea to create a personalised website. As with the development of web design, APA platforms now allow users to get to the creative stage – or the ‘thinking stage’ – sooner. It leapfrogs the mundane tasks of sourcing, cleaning and organising data. The equivalent of web design’s user-friendly drag-and-drop features are the hundreds of building blocks that jump-start the process of creating useful analytic models. Through a unified method of managing data analytics, automating business processes and elevating employees to spend their time on more strategic solving, APA reshapes the way companies generate data-driven insights and act on them. This enables upskilled employees in all parts of the business to ask hard questions and obtain swift answers without always relying upon the advanced skills of data experts. By replacing a range of cumbersome point solutions with one platform that sits across the entire analytic journey, APA also enables anyone in any organisation to build predictive models and use predictive data analytics to drive quick wins.


From Monolith to Event-Driven: Finding Seams in Your Future Architecture

The CQRS pattern strongly suggests that it is about the segregation of commands and queries, but realizing that there is a difference between 1) asking for the state of a system and 2) asking the system to change its state is more fundamental than the separation itself. In fact, you’ll find many variants of CQRS implementations ranging from logical to physical. Combining EDA with the CQRS pattern is a natural increment of the system’s design because commands are the generators of events. With CQRS and commands, the migration of data during the transitional state of an architecture provides a seam which, once the migration is over, can be removed. This seam is covered in more detail in the Data Migration Seams section. Event sourcing a system means treating events as the source of truth. In principle, until an event is made durable within the system, it cannot be processed any further. Just as an author’s story is not a story at all until it’s written, an event should not be projected, replayed, published or otherwise processed until it is durable, such as being persisted to a data store. Other designs where the event is secondary cannot rightfully claim to be event sourced, but are merely event-logging systems.
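The command/query split and the durable-event-first rule above can be sketched in a few lines. This is an illustrative toy (a single in-memory aggregate, not a framework): a command changes state only by appending an event, the event is recorded before it is applied, and a query reads a projection without mutating anything.

```python
class Account:
    """Toy event-sourced aggregate: the event log is the source of truth."""

    def __init__(self):
        self.events = []   # append-only log; stands in for a durable store
        self.balance = 0   # projection derived purely from the events

    def _apply(self, event):
        """Project one event onto the read state."""
        kind, amount = event
        self.balance += amount if kind == "deposited" else -amount

    def deposit(self, amount):
        """Command: asks the system to change state, expressed as an event."""
        event = ("deposited", amount)
        self.events.append(event)   # make the event 'durable' first...
        self._apply(event)          # ...and only then project it

    def current_balance(self):
        """Query: asks for the state of the system; never mutates it."""
        return self.balance
```

Because the projection is derived entirely from `events`, it can be rebuilt by replaying the log, which is what distinguishes this from a design that merely logs events after the fact.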



Quote for the day:

"Leadership does not always wear the harness of compromise." -- Woodrow Wilson