Daily Tech Digest - October 06, 2020

What is Blockchain as a Service (BaaS) in the Tech Industry?

Blockchain is becoming more and more popular, not just for cryptocurrency but for financial transactions where security and transparency are a must. However, it is expensive and technologically complicated to create, maintain, and operate a blockchain. That is why many small and mid-sized companies are hesitant to invest fully in blockchain even though its advantages are obvious. Blockchain as a Service (BaaS) can resolve this problem. It is based on the Software as a Service (SaaS) model: a provider invests specifically in creating, maintaining, and operating a blockchain, and then offers the advantages of blockchain to other companies as a service for a fee. Providers can offer blockchain on any of the available distributed ledgers such as Ethereum, Bitcoin, R3 Corda, Hyperledger Fabric, and Quorum, along with peripheral services such as system security, bandwidth management, and resource optimization. In this way, small and mid-sized companies that don’t want to build and maintain their own blockchain systems from scratch can still obtain the advantages of blockchain for a nominal fee. These companies can focus on their core business and obtain value from the blockchain without needing to become experts in the technology.


How companies can overcome the content processing drawbacks of RPA

While the need to enlist assistance from additional software is valid, organisations must be careful about overspending, and ensure that the tools they invest in serve a clear, specific purpose. ... “There are a couple of different ways for customers to overcome these shortcomings. One is to buy a tailored point solution like an OCR tool, which can extract data from documents; or they could invest in a workflow tool to help them orchestrate robots and humans; or perhaps buy some machine learning from Google to try and extract insights from their complex documents. These tools are designed to solve a very narrow set of problems, within tight parameters. However, each of these has its own technical challenges; when embarking on one of these projects, you face significant cost, plus you need the right skills and tech to support each initiative. Each use case needs to be treated as an individual project, because you’re effectively buying for that particular need. And if you have lots of different types of data in your organisation, and lots of different processes with this level of unstructured data, you need to start again each time and buy the right solution to fix each individual problem.”


Red Hat Envisions Linux Operating System As More Than ‘Just A Commodity’

Enterprise Linux company Red Hat has wanted users to think more of their operating ‘engines’ for some time now, since long before IBM’s acquisition of the company (announced in 2018 and completed in 2019). The company released its Red Hat Enterprise Linux 7 software back in June 2014 and followed up with Red Hat Enterprise Linux 8 in May last year. Known affectionately among the developer cognoscenti as RHEL (pronounced ‘rel’, as in relate, relish or relax), the software has been built to align specifically with cloud-native computing, containers (a way of breaking application functions into smaller discrete blocks) and all forms of automation and AI-fuelled autonomous computing. Underpinning all the individual functions that Red Hat puts into its enterprise operating system is a desire for departments, teams and individual users to consider the OS as a performance vehicle in and of itself, i.e. something more than just a commodity engine. If that sounds like marketing spin, then it probably is… so can the company substantiate any of that gloss and explain how the engine in your computer system might actually change the way we work?


T2 security chip on Macs can be hacked to plant malware; cannot be patched

The attack requires combining two other exploits that were initially used for jailbreaking iOS devices — namely Checkm8 and Blackbird. It works because T2 chips share hardware and software features with iPhones. According to a post from Belgian security firm ironPeak, jailbreaking a T2 security chip involves connecting to a Mac/MacBook via USB-C and running version 0.11.0 of the Checkra1n jailbreaking software during the Mac’s boot-up process. Per ironPeak, this works because “Apple left a debugging interface open in the T2 security chip shipping to customers, allowing anyone to enter Device Firmware Update (DFU) mode without authentication.” “Using this method, it is possible to create an USB-C cable that can automatically exploit your macOS device on boot,” ironPeak said. This allows an attacker to gain root access on the T2 chip and modify and take control of anything running on the targeted device, even recovering encrypted data […] The danger regarding this new jailbreaking technique is pretty obvious: any Mac or MacBook left unattended can be hacked by someone who can connect a USB-C cable, reboot the device, and then run Checkra1n 0.11.0.


Classifying Your Third Parties: An Essential Third Party Due Diligence First Step

Of course, this brings us to ask when a company “knows” that a third party will make an improper payment. Under the FCPA, a person has the requisite knowledge to be liable when he or she is aware of the potential wrongdoing, cognizant of a high probability of the existence of such wrongdoing, or intentionally ignorant of the potential wrongdoing. In other words, Congress did not want to allow people to “sneak around” the FCPA by using a third party. As Congress made clear, it meant to impose liability not only on those with actual knowledge of wrongdoing, but also on those who purposefully avoid actual knowledge: [T]he so-called “head-in-the-sand” problem – variously described in the pertinent authorities as “conscious disregard,” “willful blindness” or “deliberate ignorance” – should be covered so that management officials could not take refuge from the Act’s prohibitions by their unwarranted obliviousness to any action (or inaction), language or other “signaling device” that should reasonably alert them of the “high probability” of an FCPA violation.


People-focused digital transformation: What benefit does it have for your employees?

“Digitally mature” companies, where leadership teams are proactively jumping on and implementing digital trends, are increasingly what job-seekers look for. From attracting talent to retaining it, organizations that pioneer a digital strategy for their processes, use technology efficiently and adapt in line with digital trends will undoubtedly see more success than organizations that don’t. The focus is no longer just on what an employee can bring to a company but also on what the company can deliver to the employee to develop their skill set in preparation for the next step of their career. And, with research revealing that the benefits of a digital-first company include improved operational efficiency as well as faster time to market, it’s clear why a prospective employee would opt for a digitally transformed company over one that still runs on mostly manual processes. Factors such as remote working, the use of technology to improve productivity and developing skills away from an office-based environment can lead to people enjoying their jobs more.


New ransomware vaccine kills programs wiping Windows shadow volumes

This weekend, security researcher Florian Roth released the 'Raccine' ransomware vaccine, which monitors for the deletion of shadow volume copies via the vssadmin.exe command. "We see ransomware delete all shadow copies using vssadmin pretty often. What if we could just intercept that request and kill the invoking process? Let's try to create a simple vaccine," Raccine's GitHub page explains. Raccine works by registering the raccine.exe executable as a debugger for vssadmin.exe using the Image File Execution Options Windows registry key. Once raccine.exe is registered as a debugger, every time vssadmin.exe is executed, Raccine will launch as well and check whether vssadmin is trying to delete shadow copies. If it detects a process using 'vssadmin delete' or 'vssadmin resize shadowstorage', it will automatically terminate that process; ransomware usually runs these commands before it begins encrypting files on a computer. It should also be noted that Raccine may terminate legitimate software that uses vssadmin.exe as part of its backup routine. Roth plans to add the ability for certain programs to bypass Raccine in the future so that they are not mistakenly terminated.
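Raccine itself is a Windows executable, but the core decision it makes — inspect the vssadmin command line and decide whether to kill the invoking process — can be sketched in a few lines. The following Python function is an illustration of that pattern, not Raccine's actual code; the function name and exact matching rules are invented here:

```python
# Sketch of a Raccine-style check (illustrative; not Raccine's actual code).
# Raccine registers raccine.exe as a "debugger" for vssadmin.exe under the
# Image File Execution Options registry key, so it runs on every vssadmin launch
# and can inspect the command line before deciding to kill the process.

def looks_like_shadow_copy_wipe(cmdline: str) -> bool:
    """Return True if a command line matches the vssadmin invocations
    ransomware typically uses to destroy shadow volume copies."""
    tokens = cmdline.lower().split()
    # Only vssadmin invocations are of interest.
    if not tokens or "vssadmin" not in tokens[0]:
        return False
    # 'vssadmin delete ...' removes shadow copies outright.
    if "delete" in tokens:
        return True
    # 'vssadmin resize shadowstorage ...' shrinks storage to evict copies.
    if "resize" in tokens and "shadowstorage" in tokens:
        return True
    return False
```

A real interceptor would, on a match, terminate the parent process rather than just return a boolean; benign invocations such as `vssadmin list shadows` pass through.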


The Abyss of Ignorable: A Route into Chaos Testing from Starling Bank

Imagine if every abstraction came with a divinely guaranteed SLA. (They don’t.) Every class and method call, every library and dependency. Pretend that the SLA is a simple percentage. (They never are.) There are some SLAs (100%, fifty nines) for which it would be wrong to even contemplate failure let alone handle it or test for it. The seconds you spent thinking about it would already be worth more than the expected loss from failure. In such a world you would still code on the assumption that there are no compiler bugs, JVM bugs, CPU instruction bugs - at least until such things were found. On the other hand there are SLAs (95%, 99.9%) for which, at reasonable workloads, failure is effectively guaranteed. So you handle them, test for them and your diligence is rewarded. We get our behaviour in these cases right. We rightly dismiss the absurd and handle the mundane. However, human judgement fails quite badly when it comes to unlikely events. And when the cost of handling unlikely events (in terms of complication) looks unpleasant, our intuition tends to reinforce our laziness. A system does not have to be turbulent or complex to expose this. 
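The dividing line the author draws, between failures too absurd to contemplate and failures that are effectively guaranteed, falls out of a one-line probability calculation. A quick illustrative sketch (the call volumes and rates below are made up for the example):

```python
# Probability of at least one failure across n independent calls,
# given a per-call success rate (the "SLA" as a simple percentage).
def p_any_failure(success_rate: float, n_calls: int) -> float:
    return 1.0 - success_rate ** n_calls

# At 99.9%, even a modest workload makes failure effectively certain,
# so handling and testing for it pays off:
likely = p_any_failure(0.999, 10_000)

# At ten nines, the same workload makes failure negligible, and the
# seconds spent contemplating it already outweigh the expected loss:
absurd = p_any_failure(0.9999999999, 10_000)
```

Here `likely` comes out essentially indistinguishable from 1, while `absurd` is on the order of one in a million, which is the gap where the article argues human intuition starts to fail.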


Announcing third-party code scanning tools: static analysis & developer security training

Code scanning is a developer-first, GitHub-native approach to easily finding security vulnerabilities before they reach production. Code scanning is powered by GitHub’s CodeQL static analysis engine and is extensible to include third-party security tools. Extensibility provides a lot of flexibility and customizability for teams while maintaining the same user experience for developers. This capability is especially helpful if you: work at a large organization that has grown through acquisitions and has teams running different code scanning tools; need additional coverage for specific areas such as mobile, Salesforce development, or mainframe development; need customized reporting or dashboarding services; or simply want to use your preferred tools while benefiting from a single user experience and a single API. What makes this possible is GitHub code scanning’s API endpoint, which can ingest scan results from third-party tools using the open standard Static Analysis Results Interchange Format (SARIF). Third-party code scanning tools are initiated with a GitHub Action or a GitHub App based on an event in GitHub, like a pull request.
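To give a sense of what the ingestion endpoint consumes, here is a minimal SARIF 2.1.0 document assembled in Python. The tool name, rule id, and file path are invented for illustration; in practice an actual scanner emits this and a GitHub Action or App uploads it:

```python
import json

# Minimal SARIF 2.1.0 document: one tool, one rule, one result.
# "example-scanner", "EX001", and the location are placeholders.
sarif = {
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {
            "name": "example-scanner",
            "rules": [{
                "id": "EX001",
                "shortDescription": {"text": "Hard-coded credential"},
            }],
        }},
        "results": [{
            "ruleId": "EX001",
            "level": "error",
            "message": {"text": "Possible hard-coded credential."},
            "locations": [{"physicalLocation": {
                "artifactLocation": {"uri": "src/config.py"},
                "region": {"startLine": 42},
            }}],
        }],
    }],
}

print(json.dumps(sarif, indent=2))
```

Each result in the uploaded file then surfaces in the repository's code scanning alerts alongside CodeQL findings, which is what gives third-party tools the same developer experience.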


It's Not Magic, It's Elastic: Getting Digital Transformation Right

Covid-19 battered many sectors, and the restaurant industry was certainly near the top of the list. Yet while lockdowns and contagion fears cratered restaurant sales in the second quarter of 2020, revenue at fast-casual chain and PwC customer Chipotle fell a modest 4.8%. How did they pull that off? By growing digital sales by 216%. By July, the company’s sales were rising again, and digital sales continued to rise too, providing nearly half of Chipotle’s July sales. This is elasticity — a quick pivot to digital sales, then keeping that online revenue growing even as in-person purchases pick up again. Another fast-casual chain, Panera, also pivoted fast during the pandemic’s peak. While on-site dining was shut down, Panera stores sold groceries and offered them for curbside pickup. Or consider lodging, another sector that the pandemic hit especially hard. Red Roof Inns seemed to realize that their “essential” offering was private space with WiFi — so they started offering day rates to people who wanted to work from anywhere but home. These companies were elastic because they had built out their digital infrastructure.



Quote for the day:

"If you want people to to think, give them intent, not instruction." -- David Marquet

Daily Tech Digest - October 05, 2020

Egregor Ransomware Adds to Data Leak Trend

As with other ransomware gangs, such as Maze and Sodinokibi, the operators behind the Egregor ransomware are threatening to leak victims' data if the ransom demands are not met within three days, according to an Appgate alert. The cybercriminals linked to Egregor are also taking a page from the Maze playbook, creating a "news" site on the darknet that offers a list of victims that have been targeted and updates about when stolen and encrypted data will be released, according to the alert. "Egregor's ransom note also says that aside from decrypting all the files in the event the company pays the ransom, they will also provide recommendations for securing the company's network, 'helping' them to avoid being breached again, acting as some sort of 'black hat pentest team,'" according to Appgate. It's not clear how much ransom the operators behind Egregor are demanding or whether any data has been leaked, according to Appgate. A copy of one ransom note posted online notes the cybercriminals plan to release stolen data through what they call "mass media." While Appgate released an alert to customers on Friday, the Egregor ransomware variant was first spotted in mid-September by several independent security researchers, including Michael Gillespie, who posted samples of the ransom note on Twitter.


Five reasons why Scrum is not helping in getting twice the work done in half the time

Do you measure the velocity of the team? Do you calculate how long a person was busy doing something? Do you measure estimated time for a task vs. actual time spent? Or do you measure things like defects per story, defect removal efficiency and code coverage? None of this is harmful as long as it is used for the right purposes, like velocity for forecasting and code coverage for quality of code. But it makes more sense to measure time to market, customer satisfaction, NPS, usage index, response time, and innovation rate. If you were releasing once a year and are now releasing every quarter, you have already improved fourfold, but would you like to stop there? Look at how much time your team takes from development to deployment in production. ... We wanted people to reach their destination faster by driving faster. We taught them how to drive, managed traffic well, and put instructions everywhere, but people are still not going above 40 km an hour, although overall times have improved because there is less trouble while driving. When we checked, people complained about the 20-year-old cars they have been driving. Our teams have a similar story.


How technology will shape the future of the workplace

Organisations often find it challenging to carry out business transformation projects successfully — and shaping the future of the workplace is no different. While there may be a willingness to change, there are many ways that change projects become stuck in the mire, their momentum stalled by hundreds of micro-actions taken (and not taken) throughout the organisation. The pandemic changed things. Businesses have learned that a major change project that would normally have taken six months to a year — such as enabling everyone to work remotely — can be done much faster. Necessity is indeed the mother of invention; innovation happens when people and organisations realise they have to act fast to stay competitive. ... As virtual working becomes less novel, more businesses will explore ways that they can support their employees and keep the team working efficiently. We’ll also start to see a re-evaluation of what working means. The days when it was defined by who sat at their desk the longest had already started to wane before the pandemic hit. Now, with the freedom to be creative that lockdown granted business leaders, companies are starting to look beyond hours worked and things produced and towards the quality of that work and the effect it has on the goals of the business.


Data Management skills

Nowadays, digital transformation is about applying a data-driven approach to every aspect of the business in an effort to create a competitive advantage. That's why more and more companies want to build their own data lake solutions. This trend is continuing and those skills are still in demand. The most popular tools here are still HDFS for on-prem solutions and the cloud data storage services from AWS, GCP, and Azure. Aside from that, there are also data platforms trying to fill several niches and create integrated solutions, for example Cloudera, Apache Hudi, and Delta Lake. ... There are Data Warehouses, where the information is sorted, ordered, and presented in the form of final conclusions (the rest is discarded), and Data Lakes — "dump everything here, because you never know what will be useful". The Data Hub is aimed at use cases that belong to neither the first nor the second category. The Data Hub architecture allows you to leave your data where it is, centralizing the processing but not the storage. The data is searched and accessed right where it is located at the moment. But, because the Data Hub is planned and managed, organizations must invest significant time and energy determining what their data means, where it comes from and what transformations it must undergo before it can be put into the Data Hub.


These 10 tech predictions could mean huge changes ahead

According to Ashenden, the need to support creativity and innovation is urgent for businesses in the current context. As a result, the tools that enable collaboration are getting a huge boost – and not a short-term one. "Those areas will become much more central going forward," she said. "A lot of work processes that once relied on face-to-face have gone digital now, and that won't go back. Even when people are back in the office – once these things live in a digital world, that's where they live." Connectivity, according to CCS Insight, will also change as a result of the switch to remote work. From next year, the firm expects network operators to offer dedicated "work from home" packages to businesses, differentiating between corporate and personal usage, so that employers can provide staff with appropriate services such as security, collaboration tools and IT support. Operators will also increase their focus on connectivity in suburban zones, rather than city centers, as the workforce becomes increasingly established outside of the office. And as connectivity becomes ever-more important, the research firm predicts that the next three years will be rocked by governments' actions to better protect their national telecom infrastructure.


Improving WebAssembly and Its Tooling -- Q&A with Wasmtime’s Nick Fitzgerald

It’s about discovering otherwise hidden and hard-to-find bugs. There’s a ton that we miss with basic unit testing, where we write out some fixed set of inputs and assert that our program produces the expected output. We overlook some code paths or we fail to exercise certain program states. The reliability of our software suffers. We are fallible, but at least we can recognize our limitations and compensate for them. Testing pseudo-random inputs helps us avoid our own biases by feeding our system “unexpected” inputs. It helps us find integer overflow bugs or pathological inputs that allow (untrusted and potentially hostile) users to trigger out-of-memory bugs or timeouts that could be leveraged as part of a denial of service attack. Some people are familiar with testing pseudo-random inputs via “property-based testing” where you assert that some property always holds and then the testing framework tries to find inputs where your invariant is violated. For example, if you are implementing the reverse method for an array, you might assert the property that reversing an array twice yields an array identical to the original.
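The reverse-twice property Fitzgerald mentions can be sketched without any framework by generating pseudo-random inputs directly. Frameworks such as Hypothesis (Python) or proptest (Rust) automate the input generation and shrink failing cases, but the core idea is just this:

```python
import random

def reverse(xs):
    # The implementation under test; a real project would test its own.
    return xs[::-1]

def check_reverse_roundtrip(trials: int = 1000, seed: int = 0) -> None:
    """Property: reversing an array twice yields the original array."""
    rng = random.Random(seed)
    for _ in range(trials):
        # Generate a pseudo-random list of pseudo-random length.
        xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
        assert reverse(reverse(xs)) == xs, f"property violated for {xs}"

check_reverse_roundtrip()
```

A buggy `reverse` (say, one that drops the last element) would be caught within a handful of trials, without anyone having to anticipate the failing input in advance.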


7 Essentials of Digital Transformation Success

Consumers have come to expect organizations to use their personal information to create custom solutions. Especially during the pandemic, consumers have become accustomed to the benefits of Netflix and Spotify using machine learning for entertainment recommendations, Zoom using just a couple of clicks to create video engagement, and Google Home or Amazon Alexa using voice for everything from answering inquiries to simplifying shopping. These same consumers expect their bank or credit union to use their relationship data, behaviors and preferences the same way … or better. But advanced analytics and AI should not be goals in and of themselves. These tools should be used to support broader strategies. According to Wharton, “Instead of exhaustively looking for all the areas AI could fit in, a better approach would be for companies to analyze existing goals and challenges with a close eye for the problems that AI is uniquely equipped to solve.” Solutions range from fraud detection to predictive recommendations for customers. Now more than ever, AI needs to be used to deliver human-like intelligence across the entire organization.


Inadequate skills and employee burnout are the biggest barriers to digital transformation

The ongoing disruption of the pandemic has shown how important it can be for businesses to be built for change. Many executives are facing demand fluctuations, new challenges to support employees working remotely and requirements to cut costs. In addition, the study reveals that the majority of organizations are making permanent changes to their organizational strategy. For instance, 94% of executives surveyed plan to participate in platform-based business models by 2022, and many reported they will increase participation in ecosystems and partner networks. Executing these new strategies may require a more scalable and flexible IT infrastructure. Executives are already anticipating this: the survey showed respondents plan a 20 percentage point increase in prioritization of cloud technology in the next two years. What’s more, executives surveyed plan to move more of their business functions to the cloud over the next two years, with customer engagement and marketing being the top two cloudified functions. COVID-19 has disrupted critical workflows and processes at the heart of many organizations’ core operations. Technologies like AI, automation and cybersecurity that could help make workflows more intelligent, responsive and secure are increasing in priority across the board for responding global executives.


Is Cloud Migration a Path to Carbon Footprint Reduction?

Energy efficiency within an enterprise may go hand in hand with other organizational traits, according to the report. Accenture’s research from 2013 to 2019 found that companies that consistently earned high marks on environmental, social, and governance performance also saw operating margins 4.7x higher than organizations with lower performance in those areas. There were also indications of higher annual returns to shareholders among those environmentally minded enterprises. In addition to the potential benefit cloud migration presents for the environment, Accenture’s report shows there can be total-cost-of-ownership savings of 30-40% when organizations migrate to more cost-efficient public clouds. The report also shed light on how cloud migration affected Accenture’s own expenses. The firm runs 95% of its applications in the cloud, the report says. After its third year of migration, Accenture saw $14.5 million in benefits, plus another $3 million in annualized costs saved by right-sizing its service consumption. Moving to the cloud might not mean much in terms of cutting energy consumption, however, if the service provider does not take steps to be more energy efficient.


Neuromorphic computing could solve the tech industry's looming crisis

Rather than separate out the memory and computing like most chips in use today, neuromorphic hardware keeps both together, with processors having their own local memory -- a more brain-like arrangement -- that saves energy and speeds up processing. Neuromorphic computing could also help spawn a new wave of artificial intelligence (AI) applications. Current AI is usually narrow and developed by learning from stored data, developing and refining algorithms until they reliably match a particular outcome. Using neuromorphic tech's brain-like strategies, however, could allow AI to take on new tasks. Because neuromorphic systems can work like the human brain -- able to cope with uncertainty, adapt, and use messy, confusing data from the real world -- they could lay the foundations for AIs to become more general. "The more brain-like workloads approximate computing, where there's more fuzzy associations that are in play -- this rapid adaptive behaviour of learning and self modifying the programme, so to speak. These are types of functions that conventional computing is not so efficient at, and so we were looking for new architectures that can provide breakthroughs," says Mike Davies.



Quote for the day:

"It's not the position that makes the leader. It's the leader that makes the position." -- Stanley Huffty

Daily Tech Digest - October 04, 2020

What Is Dark Data Within An Organisation?

In the universe of information assets, data may be deemed dark for a number of reasons: because it is unstructured, because it sits behind a firewall, because of its speed or volume, or because people simply have not made the connections between the different data sets. It may also be dark because it does not live in a relational database, or because, until recently, the techniques required to leverage the data effectively did not exist. Dark data is often text-based and stays within company firewalls, but remains very much untapped. For instance, supply chain complexity is a significant challenge for organisations. The supply chain is a data-driven industry traversing a network of global suppliers, distribution channels and customers. This industry churns out huge volumes of data, yet it is estimated that only 5% of it is being used. The 95% that is not being utilised for analytics presents an opportunity for big data technologies to bring this dark data to light. To date, organisations have explored only a small fraction of the digital universe for data analytic value. Dark analytics is about turning dark data into intelligence and insight that a company can use.


Quantum computing meets cloud computing

As part of Leap, developers can also use a feature called the hybrid solver service (HSS), which combines both quantum and classical resources to solve computational problems. This "best-of-both-worlds" approach, according to D-Wave, enables users to submit problems of ever-larger sizes and complexities. Advantage comes with an improved HSS, which can run applications with up to one million variables – a jump from the previous generation of the technology, in which developers could only work with 10,000 variables. "When we launched Leap last February, we thought that we were at the beginning of being able to support production-scale applications," Alan Baratz, the CEO of D-Wave, told ZDNet. "For some applications, that was the case, but it was still at the small end of production-scale applications." "With the million variables on the new hybrid solver, we really are at the point where we are able to support a broader array of applications," he continued. A number of firms, in fact, have already come to D-Wave with a business problem, and a quantum-enabled solution in mind. According to Baratz, in many cases customers are already managing the small-scale deployment of quantum services, and are now on the path to full-scale implementation.
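Problems go to the hybrid solver as QUBOs (quadratic unconstrained binary optimization problems): a dictionary of linear and quadratic coefficients over binary variables. A toy sketch follows, encoding "pick exactly one of two options" as a penalty and checking it by brute force; the commented-out lines indicate roughly how the same QUBO would be submitted to Leap via D-Wave's Ocean SDK (that call needs an installed SDK and an API token, so it is shown only as a sketch):

```python
from itertools import product

# QUBO for the constraint x0 + x1 == 1, via the penalty (x0 + x1 - 1)^2.
# Expanding and dropping the constant gives: -x0 - x1 + 2*x0*x1.
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}

def energy(assignment):
    """Evaluate the QUBO objective for one binary assignment."""
    return sum(coeff * assignment[i] * assignment[j]
               for (i, j), coeff in Q.items())

# Brute-force the 4 assignments; the minima are (0, 1) and (1, 0),
# i.e. exactly one option picked, with energy -1.
best = min(product([0, 1], repeat=2), key=energy)

# With D-Wave's Ocean SDK, this QUBO would instead be submitted to the
# Leap hybrid solver (sketch only; requires an API token):
# from dwave.system import LeapHybridSampler
# sampleset = LeapHybridSampler().sample_qubo(Q)
```

The hybrid solver's headline numbers refer to this formulation: the jump from 10,000 to one million variables is the size of the dictionary `Q` a single submitted problem may span.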


H&M Hit With Record-Breaking GDPR Fine Over Illegal Employee Surveillance

Swedish multinational retail company H&M has been hit with a monumental €35 million ($41.3 million) GDPR fine for illegally surveilling employees in Germany. The Data Protection Authority of Hamburg (HmbBfDI) announced the fine on Thursday after the company was found to have excessively monitored several hundred employees in a Nuremberg service centre. The watchdog said that since at least 2014, parts of the workforce had been subject to "extensive recording of details about their private lives".  "After absences such as vacations and sick leave the supervising team leaders conducted so-called Welcome Back Talks with their employees. After these talks, in many cases not only the employees' concrete vacation experiences were recorded, but also symptoms of illness and diagnoses,” HmbBfDI said. “In addition, some supervisors acquired a broad knowledge of their employees' private lives through personal and floor talks, ranging from rather harmless details to family issues and religious beliefs.” The extensive data collection was exposed in October 2019 when such data became accessible company-wide for several hours due to a configuration error.


How CIOs can convert Data Lakes into profit centres

Most CIOs today are comfortable with traditional concepts of BI and Data Warehousing. These mature technologies have worked well to help the organization gain insights into what happened in the past - but are no longer sufficient by themselves. ML and AI are required technologies today for generating the next set of competitive advantages - predicting the future, gaining deep insights from unstructured data and creating data-driven products. Relational databases are often incapable of handling rapidly evolving data formats and unstructured data, like natural language text and multimedia, which are the fuel for this ML- and AI-driven revolution. ... Exploding data sizes, increasing data democratization and increasingly rich and complex data processing workloads mean traditional on-premise hardware has a hard time keeping up. The processing power of modern processors for AI and ML (GPUs/TPUs) is doubling every few months - leaving Moore's law in the dust. Capital sunk in on-premise hardware becomes obsolete faster than ever. Rapidly innovating hardware in the cloud enables new classes of applications or breaks performance barriers for old ones.


Data Governance & Privacy Best Practices to Lower Risk and Drive Value

An enterprise-wide data governance program is your key to accelerating digital transformation programs such as cloud migration, improving customer experience with trust assurance, and lowering operating expenses when data use is optimized, in line with your corporate policies. In today’s world with more data being available from more sources, it’s no surprise that we look for an automated and scalable methodology to manage all this information. Data governance is a discipline that encompasses the rules, policies, roles, responsibilities, and tools we put in place to ensure our data is accurate, consistent, complete, available, and secure to enable trust in the outcomes we plan to achieve.  From my experience, these are three best practices around governing data to maximize the success of business transformation agendas, reduce uncertainty, and ensure safe and appropriate data use. ... Leading global organizations are leveraging Informatica’s integrated and intelligent Data Governance and Privacy solution portfolio to proactively add value to their bottom line today. It is about getting the right information to the right people at the right time, enabling the entire organization to be proactive, in order to identify and act on new opportunities and plan for the best results, instead of reacting to unanticipated surprises.


Cyber-attack victim CMA CGM struggling to restore bookings, say customers

As CMA CGM’s IT engineers continue, for the fifth day, to try to restore its systems following a cyber-attack at the weekend, the French carrier has come under mounting criticism from customers that its back-up booking process is inadequate. Yesterday, the carrier said its “back-offices [shared services centres] are gradually being reconnected to the network, thus improving bookings and documentation processing times”. And it reiterated that bookings could still be made through the INTTRA portal, as well as manually via an Excel form attached to an email. However, Australian forwarder and shipper representatives, the Freight & Trade Alliance (FTA) and Australian Peak Shippers Association (APSA), described the measures as “failing to adequately provide contingency services”. John Park, head of business operations at FTA/APSA, said its members ought to be due compensation from the carrier and its subsidiary, Australia National Line, which operates some 14 services to Australia, according to the eeSea liner database. “FTA/APSA has reached out again to senior CMA CGM management to seek advice as to when we can expect full service to be re-instated, implementation of workable contingency arrangements and acceptance that extra costs incurred ...


Researchers create a graphene circuit that makes limitless power

The breakthrough is an offshoot of research conducted three years ago at the University of Arkansas, which discovered that freestanding graphene (a single layer of carbon atoms) ripples and buckles in a way that holds energy-harvesting potential. The idea was controversial because it appears to refute a well-known assertion from physicist Richard Feynman that the thermal motion of atoms, known as Brownian motion, cannot do work. However, the university researchers found that, at room temperature, the thermal motion of graphene does induce an alternating current in a circuit, an achievement previously thought to be impossible. The researchers also discovered that their design increased the amount of power delivered: the on-off, switch-like behavior of the diodes amplifies the power delivered rather than reducing it, as previously believed. Scientists on the project were able to use a relatively new field of physics, called stochastic thermodynamics, to prove that the diodes increase the circuit’s power. The researchers say that the graphene and the circuit share a symbiotic relationship.


Frameworks for Data Privacy Compliance

As new privacy regulations are introduced, organizations that conduct business and have employees in different states and countries are subject to an increasing number of privacy laws, making the task of maintaining compliance more complex. While these laws require organizations to implement reasonable security measures, they do not outline what specific actions should be taken to satisfy this requirement. As a result, many risk managers are turning to proven security frameworks that specifically address privacy. Doing so can help organizations build privacy and security programs that make compliance more manageable, even when beholden to multiple regulations. While no two frameworks are the same, each is designed to help organizations identify and address potential security gaps that could negatively impact data privacy. Such frameworks include the Center for Internet Security (CIS) Top 20, the Health Information Trust Alliance Common Security Framework (HITRUST CSF), and the National Institute of Standards and Technology (NIST) Framework. ... Originally designed for health care organizations and third-party vendors that serve health care clients, HITRUST CSF leads organizations beyond baseline security practices to establish a strong, mature security program.


Selecting Security and Privacy Controls: Choosing the Right Approach

The baseline control selection approach uses control baselines, which are pre-defined sets of controls assembled to address the protection needs of a group, organization, or community of interest. Security and privacy control baselines serve as a starting point for the protection of information, information systems, and individuals’ privacy. Federal security and privacy control baselines are defined in draft NIST Special Publication 800-53B. The three security control baselines contain sets of security controls and control enhancements that offer protection for information and information systems categorized as low-impact, moderate-impact, or high-impact, based on the potential adverse consequences for the organization’s missions or business operations, or the loss of assets, should the system be breached or compromised. The system security categorization, risk assessment, and security requirements derived from stakeholder protection needs, laws, executive orders, regulations, policies, directives, and standards can help guide and inform the selection of security control baselines from draft Special Publication 800-53B.
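The selection logic sketched above follows the "high-water mark" rule from FIPS 200: the overall impact level, and hence the SP 800-53B baseline, is the highest of the three security objectives. A minimal Python sketch (function name and return format are illustrative, not taken from the publication):

```python
# High-water-mark baseline selection, per FIPS 200: the overall impact
# level is the highest of the three security objectives.
LEVELS = ["low", "moderate", "high"]

def select_baseline(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the security control baseline for a system categorization."""
    highest = max(
        (confidentiality, integrity, availability),
        key=LEVELS.index,  # raises ValueError on an unknown level
    )
    return f"{highest}-impact baseline"

# A system with moderate confidentiality needs but high availability
# needs is still assigned the high-impact baseline.
print(select_baseline("moderate", "low", "high"))  # high-impact baseline
```

The point of the sketch is that a single elevated objective pulls the whole system up to the stronger baseline; tailoring the controls within that baseline then happens separately.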


Emerging challenges and solutions for the boards of financial-services companies

Actions by boards reflect the increased attention all financial firms are now devoting to cyberrisk. Ninety-five percent of board committees, for example, discuss cyberrisks and tech risks four times or more a year (Exhibit 1). One such firm holds optional deep-dive sessions the week before each quarter’s board meeting. These sessions cover relevant topics, such as updates on the current intelligence on threats, case studies of recent breaches that could affect the company or others in the industry, and the impact of regulatory changes. ... There has been a remarkable shift in board awareness of cybersecurity in the past few years: for example, earlier McKinsey research, from 2017, suggested that only 25 percent of all companies gave their boards information-technology and security updates more than once a year. More frequent and consistent communication between board members and senior management on this topic now enables boards to understand the financial, operational, and technological implications of emerging cybersecurity threats for the business and to guide its direction accordingly. Firms increasingly recruit experts for these committees.



Quote for the day:

"Superlative leaders are fully equipped to deliver in destiny; they locate eternally assigned destinies." -- Anyaele Sam Chiyson

Daily Tech Digest - October 03, 2020

Years-Long ‘SilentFade’ Attack Drained Facebook Victims of $4M

“Our investigation uncovered a number of interesting techniques used to compromise people with the goal to commit ad fraud,” said Sanchit Karve and Jennifer Urgilez of Facebook, in an analysis unveiled Thursday at the Virus Bulletin 2020 conference. “The attackers primarily ran malicious ad campaigns, often in the form of advertising pharmaceutical pills and spam with fake celebrity endorsements.” Facebook said that SilentFade was not downloaded or installed by using Facebook or any of its products. It was instead usually bundled with potentially unwanted programs (PUPs): software that a user may perceive as unwanted and that may compromise privacy or weaken user security. In this case, researchers believe the malware was spread via pirated copies of popular software, such as the CorelDRAW graphic design software for vector illustration and page layout. Once installed, SilentFade stole Facebook credentials and cookies from various browser credential stores, including Internet Explorer, Chromium and Firefox.


How to be great at people analytics

Most companies still face critical obstacles in the early stages of building their people analytics capabilities, preventing real progress. The majority of teams are still in the early stages of cleaning data and streamlining reporting. Interest in better data management and HR technologies has been intense, but most companies would agree that they have a long way to go. Leaders at many organizations acknowledge that what they call their “analytics” is really basic reporting with little lasting impact. For example, a majority of North American CEOs indicated in a poll that their organizations lack the ability to embed data analytics in day-to-day HR processes consistently and to use analytics’ predictive power to propel better decision making. This challenge is compounded by the crowded and fragmented landscape of HR technology, which few organizations know how to navigate. So, while the majority of people analytics teams are still taking baby steps, what does it mean to be great at people analytics? We spoke with 12 people analytics teams from some of the largest global organizations in various sectors—technology, financial services, healthcare, and consumer goods—to try to understand what teams are doing, the impact they are having, and how they are doing it.


6 Data Management Tips for Small Business Owners

You might not have the vast resources and people-power of your larger competitors, but even small e-commerce organizations can glean useful insights from data if it is presented in an engaging way. Rather than relying on raw, potentially overwhelming databases full of indecipherable figures, you should aim to generate reports which showcase pertinent trends visually. This should let you analyze information more precisely and without needing to spend hours sifting through spreadsheets. In addition, data visualization has the benefit of making it straightforward to share your findings with others, whether or not they have a background in data science and analysis. A chart or graph can express everything you need to get across in a presentation about sales projections, site performance, and customer satisfaction, without needing lengthy verbal explanations as well. While the biggest scandals involving data loss and theft tend to hit the headlines whenever they involve major organizations and internationally recognized brands, that does not mean that smaller firms are immune from scrutiny in this respect.
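As a sketch of the kind of lightweight, shareable reporting described above, the following Python snippet aggregates order data by month and renders a plain-text bar chart. The data, the $100-per-bar scale, and the function name are invented for illustration:

```python
from collections import defaultdict

def monthly_report(orders):
    """Aggregate order totals by month and render a simple text bar chart."""
    totals = defaultdict(float)
    for month, amount in orders:
        totals[month] += amount
    lines = []
    for month in sorted(totals):
        bar = "#" * int(totals[month] // 100)  # one '#' per $100 of sales
        lines.append(f"{month}  {bar} ${totals[month]:,.0f}")
    return "\n".join(lines)

orders = [("2020-07", 450.0), ("2020-08", 900.0), ("2020-08", 300.0),
          ("2020-09", 650.0)]
print(monthly_report(orders))
```

Even a report this crude makes the month-over-month trend visible at a glance, which is the point: the visual form carries the finding, not the raw spreadsheet.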


Metasploit — A Walkthrough Of The Powerful Exploitation Framework

If you hack someone without permission, there is a high chance that you will end up in jail. So if you are planning to learn hacking with evil intentions, I am not responsible for any damage you cause. All my articles are purely educational. So, if hacking is bad, why learn it in the first place? Every device on the internet is vulnerable by default unless someone secures it. It's the job of the penetration tester to think like a hacker and attack their organization’s systems. The penetration tester then informs the organization about the vulnerabilities and advises on patching them. Penetration testing is one of the highest-paid jobs in the industry. There is always a shortage of pen-testers since the number of devices on the internet is growing exponentially. I recently wrote an article on the top ten tools you should know as a cybersecurity engineer. If you are interested in learning more about cybersecurity, check out the article here. Right. Enough pep talk. Let’s look at one of the coolest pen-testing tools in the market — Metasploit. ... Metasploit is an open-source framework written in Ruby. It is written to be an extensible framework, so that if you want to build custom features using Ruby, you can easily do that via plugins.


IoT in Manufacturing: The Success Story Nobody's Talking About

Efficient manufacturing processes rely almost entirely on predictability. Factory operators need to know how long each step in a process takes, what resources are needed, and how long the process can operate continuously before needing breaks for maintenance and other periodic tasks. That overarching need for predictability makes it difficult for operators to know how the addition of new equipment might impact output. It also makes them hesitant to make changes to existing equipment, even if they’re all but certain that the changes would be an improvement. That brings us to another vital and emerging use of IoT technology in manufacturing. Factory operators are using the myriad data streaming from their connected devices to make precise computer models of their industrial equipment. These digital twins, as they’re known, allow operators to test any proposed equipment tweaks or replacements to see the exact effect they’ll have on the output. This helps them to make seamless upgrades and changes to their processes without fear of upsetting the delicate balance that ensures predictability. If the question is whether IoT is living up to its promise and proving useful in manufacturing – the answer is a resounding yes.
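A digital twin in the sense described above can be as simple as a parameterized simulation of the line. The following Python sketch, with entirely invented step times and maintenance rules, shows how an operator might compare a proposed equipment tweak against the current configuration without touching the real line:

```python
def simulate_line(step_minutes, shift_minutes=480, maintenance_every=100,
                  maintenance_minutes=30):
    """Estimate units produced per shift for a serial production line.

    The slowest step gates throughput; a maintenance pause is taken
    after every `maintenance_every` units.
    """
    cycle = max(step_minutes)          # bottleneck step time per unit
    units, clock = 0, 0.0
    while clock + cycle <= shift_minutes:
        clock += cycle
        units += 1
        if units % maintenance_every == 0:
            clock += maintenance_minutes
    return units

current = [4.0, 6.0, 5.0]              # minutes per unit, per step
proposed = [4.0, 4.5, 5.0]             # hypothetical upgrade to step 2
print(simulate_line(current), "->", simulate_line(proposed))  # 80 -> 96
```

The twin answers the "what if" question offline: the proposed upgrade shifts the bottleneck from step 2 to step 3 and raises per-shift output, so the change can be made with predictable consequences.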


Digital Transformation Can Be Risky. Here’s What You Need To Know

The business mantra “culture eats strategy for breakfast” applies differently when you’re talking about digital transformation, said Pam Hrubey, managing director in consulting services at Crowe. For example, an American-headquartered durable goods company acquired businesses across the globe. The company needed to upgrade equipment and streamline IT processes, but it chose to begin the transformation by attempting to align cultures between the parent company and the businesses abroad. Its initial process led to discontent among international workers who ended up feeling like outsiders because they were not made aware that the goal was to sync technology and processes. “To transform a business practice or to change a business model, you have to have a robust plan,” Hrubey said. “When you start with culture you often confuse people if you don’t have a plan in place, if people don’t understand what change is planned or why a change is necessary.” Companies also need to understand that a transformation affects the entire organization and might include stakeholders across departments.  “So many different people in the company need to come together to do it right,” said Czerwinski.


Data Protection Techniques Needed to Guarantee Privacy

Traditionally, a risk hierarchy existed between these two types of attributes. Direct identifiers were perceived as more “sensitive” than quasi-identifiers. In many data releases, only the former were subject to some privacy protection mechanism, while the latter were released in the clear. Such releases were often followed by prompt re-identification of the supposedly “protected” subjects. It soon became apparent that quasi-identifiers could be just as sensitive as direct identifiers. With the GDPR, this notion has finally made it into law: both types of attributes are put on the same level. Identifier and quasi-identifier attributes alike are personal data and present an equally important privacy breach risk. Nowadays, data protection laws strictly regulate personal data processing. This makes a strong case for implementing privacy protection techniques. Indeed, failure to comply exposes companies to severe penalties. Besides, implementing proper privacy protections can increase customer trust. In a world plagued by data breaches and privacy violations, people are increasingly concerned about what happens to their data. And finally, data breaches targeting personal data are costing companies money. Personal data remains the most expensive item to lose in a breach.
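A minimal sketch of the two protections implied above, pseudonymizing a direct identifier while generalizing quasi-identifiers, might look like this in Python. The field names, salt, and generalization rules are illustrative only; a real deployment would need proper key management and a formal re-identification risk assessment:

```python
import hashlib

def protect(record, salt="illustrative-salt"):
    """Pseudonymize the direct identifier and generalize quasi-identifiers."""
    token = hashlib.sha256((salt + record["name"]).encode()).hexdigest()[:12]
    band = record["age"] // 10 * 10
    return {
        "id": token,                      # direct identifier replaced by token
        "age": f"{band}-{band + 9}",      # 37 -> "30-39"
        "zip": record["zip"][:3] + "**",  # truncated postcode
        "diagnosis": record["diagnosis"], # payload attribute, unchanged
    }

row = {"name": "Jane Doe", "age": 37, "zip": "90210", "diagnosis": "flu"}
print(protect(row))
```

Note that the quasi-identifiers (age, zip) are coarsened, not merely left alone: treating them with the same seriousness as the name field is exactly the shift in thinking the passage describes.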


How AI Is Used in Data Center Physical Security Today

"There is a critical need to make full use of the massive amounts of data being generated by video surveillance cameras and AI-based solutions are the only practical answer," Memoori managing director James McHale said in a recent report. Video surveillance cameras generate a massive amount of data, McHale told DCK, and AI is the only practical way to process it all. AI systems can also be used to analyze thermal images. "Thermal cameras have been a significant growth area this year as a direct consequence of the COVID-19 pandemic," he told us. Today, many thermal cameras capture only thermal information, but customers are increasingly looking for systems whose cameras can collect both thermal and traditional images and apply neural network algorithms to process them. But there's a general lack of understanding about how to use this technology appropriately for pandemic controls, he added. Plus, the pandemic is negatively affecting some sectors of the economy, impacting spending and changing the way that companies buy technology. "Customers will be demanding more value from their investments and will be less willing to commit to upfront capital expenditure," he said.


QR Codes: A Sneaky Security Threat

Hacking an actual QR code would require some serious skills to change around the pixelated dots in the code’s matrix. Hackers have figured out a far easier method instead: embedding malicious software in QR codes, which can be generated by free tools widely available on the internet. To an average user, these codes all look the same, but a malicious QR code can direct a user to a fake website. It can also capture personal data or install malicious software on a smartphone that initiates actions like these: Add a contact listing: hackers can add a new contact listing on the user’s phone and use it to launch a spear phishing or other personalized attack; Initiate a phone call: by triggering a call to the scammer, this type of exploit can expose the phone number to a bad actor; Text someone: in addition to sending a text message to a malicious recipient, a user’s contacts could also receive a malicious text from a scammer; Write an email: similar to a malicious text, a hacker can draft an email and populate the recipient and subject lines. Hackers could target the user’s work email if the device lacks mobile threat protection ...
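A defensive counterpart to the actions listed above is to screen a decoded QR payload before the operating system acts on it. This Python sketch flags action-triggering URI schemes; the scheme table is illustrative and far from exhaustive:

```python
# Screening sketch for decoded QR payloads: schemes that trigger an
# action (call, text, email, contact import) deserve a warning before
# the OS acts on them.
RISKY_SCHEMES = {
    "tel": "initiates a phone call",
    "sms": "drafts a text message",
    "smsto": "drafts a text message",
    "mailto": "drafts an email",
    "mecard": "adds a contact listing",
    "begin": "adds a contact listing (vCard)",
}

def screen_payload(payload: str) -> str:
    head = payload.split(":", 1)[0].strip().lower()
    if head in RISKY_SCHEMES:
        return f"WARN: {RISKY_SCHEMES[head]}"
    if head in ("http", "https"):
        return "CHECK: opens a URL -- preview the destination first"
    return "INFO: plain data"

for p in ("tel:+15555550100", "MECARD:N:Doe,John;;", "https://example.com"):
    print(screen_payload(p))
```

Most reputable scanner apps do a version of this, showing the decoded payload and asking for confirmation instead of dispatching it straight to the dialer, messenger, or contacts app.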


Exploiting enhanced data management to create value in the ‘new normal’

The pandemic has fundamentally changed the way people view, access and retrieve data. It has also put new burdens on already stretched IT departments and electronic delivery, now that the footprint of use has extended to people’s homes. Data management upgrades can deliver significant benefits: an investment in advanced data management services offers the opportunity to automate and enhance process and workflow efficiency, eliminating errors and freeing up staff to focus on creating value elsewhere; and machine learning technologies offer new opportunities to make better use of your data: to implement data copy management now that digital archives have become even more important, apply proper retention strategies, and unearth new revenue streams and cost-saving opportunities. ... These days, virtually every person on the planet is consuming data, and the pandemic has made consumption grow even faster. Every meme or news story shared and every meeting recorded needs to be stored somewhere. And the larger the army of remote workers conducting business from their home offices, the greater the data storage capacity every company will require.



Quote for the day:

"However beautiful the strategy, you should occasionally look at the results." -- Winston Churchill

Daily Tech Digest - October 02, 2020

Time to reset long-held habits for a new reality

With an extended crisis a real possibility, new habits must be adopted and embraced for the business to adapt, recover and operate successfully in the long term. It’s important for CIOs to take the time to understand these habits, how they have formed and whether they are here to stay. One of the more obvious habit changes we’ve all experienced is the shift from physical meetings, where cases were presented and decisions made in person, to virtual conferences. This has made people feel more exposed in decision making, as the human interaction of reading body language has been lost. However, people have unknowingly started using data more and have shifted to making more data-driven decisions. If these new habits are here to stay for the long term, CIOs must embed them into the new DNA of the business. If they aren’t, however, it’s crucial to curb and manage them before they become automatically ingrained and costly to reverse. This happened to a CIO I recently spoke with, who made a massive technology investment, changed vendors and even shortened office leases in the rush to shift their organisation to a remote working model.


Getting Serious About Data and Data Science

The obvious approach to addressing these mistakes is to identify wasted resources and reallocate them to more productive uses of data. This is no small task. While there may be budget items and people assigned to support analytics, AI, architecture, monetization, and so on, there are no budgets and people assigned to waste time and money on bad data. Rather, this is hidden away in day-in, day-out work — the salesperson who corrects errors in data received from marketing, the data scientist who spends 80% of his or her time wrangling data, the finance team that spends three-quarters of its time reconciling reports, the decision maker who doesn’t believe the numbers and instructs his or her staff to validate them, and so forth. Indeed, almost all work is plagued by bad data. The secret to wasting less time and money involves changing one’s approach from the current “buyer/user beware” mentality, where everyone is left on their own to deal with bad data, to creating data correctly — at the source. This works because finding and eliminating a single root cause can prevent thousands of future errors and eliminate the need to correct them downstream. This saves time and money — lots of it! The cost of poor data is on the order of 20% of revenue, and much of that expense can be eliminated permanently.
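The "create data correctly at the source" idea can be made concrete with validation rules that run at the point of capture, so a bad record is rejected once instead of being corrected downstream forever. A Python sketch, with invented field names and rules:

```python
import re

# Validation-at-source sketch: each rule runs at the point of entry.
# Field names and rules are illustrative.
RULES = {
    "email": lambda v: isinstance(v, str)
        and re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "country": lambda v: isinstance(v, str) and v.isupper() and len(v) == 2,
    "amount": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def capture(record):
    """Accept a record only if every rule passes at the point of entry."""
    bad = [field for field, ok in RULES.items() if not ok(record.get(field))]
    if bad:
        raise ValueError(f"rejected at source, bad fields: {bad}")
    return record

print(capture({"email": "a@b.com", "country": "US", "amount": 10.0}))
```

One rejection here replaces thousands of downstream corrections by salespeople, data scientists, and finance teams, which is exactly the root-cause economics the passage describes.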


Most Data Science Projects Fail, But Yours Doesn’t Have To

Through data science automation, companies are not only able to fail faster (which, in data science, is a good thing), but also to improve their transparency efforts, deliver minimum value pipelines (MVPs), and continuously improve through iteration. Why is failing fast a positive? While perhaps counterintuitive, failing fast can provide a significant benefit. Data science automation allows technical and business teams to test hypotheses and carry out the entire data science workflow in days. Traditionally, this process is quite lengthy, typically taking months, and is extremely costly. Automation allows failing hypotheses to be tested and eliminated faster. Rapid failure of poor projects provides savings both financially and in increased productivity. This rapid try-fail-repeat process also allows businesses to discover useful hypotheses in a more timely manner. Why is white-box modeling important? White-box models (WBMs) provide clear explanations of how they behave, how they produce predictions, and what variables influence the model. WBMs are preferred in many enterprise use cases because of their transparent inner workings and easily interpretable behavior.
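As a toy illustration of a white-box model, here is a one-feature ordinary-least-squares fit in plain Python: the fitted coefficients are the entire explanation of how a prediction is produced. The data are invented:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature -- a fully transparent model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Ad spend (k$) vs. units sold: the coefficients ARE the explanation.
spend = [1.0, 2.0, 3.0, 4.0]
units = [12.0, 14.0, 16.0, 18.0]
slope, intercept = fit_line(spend, units)
print(f"units = {slope:.1f} * spend + {intercept:.1f}")  # units = 2.0 * spend + 10.0
```

Contrast this with a deep network: here every prediction can be read straight off the formula, which is why regulated or high-stakes enterprise use cases often prefer WBMs even at some cost in raw accuracy.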


Microsoft: Hacking Groups Shift to New Targets

Microsoft notes that, in the last two years, the company has sent out 13,000 notifications to customers who have been targeted by nation-states. The majority of these nation-state attacks originate in Russia, with Iran, China and North Korea also ranking high, according to Microsoft. The U.S. was the most frequent target of these nation-state campaigns, accounting for nearly 70% of the attacks Microsoft tracked, followed by the U.K., Canada, South Korea and Saudi Arabia. And while critical infrastructure remains a tempting target for sophisticated hacking groups backed by governments, Microsoft notes that organizations that are deemed noncritical are increasingly the focus of these campaigns. "In fact, 90% of our nation-state notifications in the past year have been to organizations that do not operate critical infrastructure," Tom Burt, corporate vice president of customer security and trust at Microsoft, writes in a blog post. "Common targets have included nongovernmental organizations, advocacy groups, human rights organizations and think tanks focused on public policy, international affairs or security. This trend may suggest nation-state actors have been targeting those involved in public policy and geopolitics, especially those who might help shape official government policies."


Why Perfect Technology Abstractions Are Sure To Fail

Everything’s an abstraction these days. How many “existential threats” are there? We need “universal” this and that, but let’s not forget that relativism – one of abstraction’s enforcers – is hovering around all the time making things better or worse, depending on the objective of the solution du jour. Take COVID-19, for example. Based upon the assumption that the US knows how to solve “enterprise” problems – the abstract principle at work – the US has done a great job. But relativism kills the abstraction: the US has roughly 4% of the world’s population and 25% of the world’s deaths. How many technology solutions sound good in the abstract, but are relatively ineffective?  The Agile family is an abstract solution to an age-old problem: requirements management and timely cost-effective software applications design and development. But the relative context is way too frequent failure. We’ve been wrestling with requirements validation for decades, which is why the field constantly invented methods, tools and techniques to manage requirements and develop applications, like rapid application development (RAD), rapid prototyping, the Unified Process (UP) and extreme programming (XP), to name a few. 


.NET Framework Connection Pool Limits and the new Azure SDK for .NET

Connection pooling in the .NET Framework is controlled by the ServicePointManager class, and the most important fact to remember is that the pool, by default, is limited to 2 connections to a particular endpoint (host+port pair) in non-web applications, and to unlimited connections per endpoint in ASP.NET applications that have autoConfig enabled (without autoConfig the limit is set to 10). After the maximum number of connections is reached, HTTP requests will be queued until one of the existing connections becomes available again. Imagine writing a console application that uploads files to Azure Blob Storage. To speed up the process, you decide to upload using 20 parallel threads. The default connection pool limit means that even though you have 20 BlockBlobClient.UploadAsync calls running in parallel, only 2 of them would actually be uploading data and the rest would be stuck in the queue. The connection pool is centrally managed on .NET Framework. Every ServicePoint has one or more connection groups, and the limit is applied to the connections in a connection group.
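The queuing behavior described above can be demonstrated in miniature with a semaphore standing in for the connection pool. This Python sketch is an analogy, not .NET code: it caps "in-flight uploads" at 2 and shows that the other 18 threads simply wait their turn:

```python
import threading
import time

POOL_LIMIT = 2                    # analogous to the default 2-connection limit
pool = threading.BoundedSemaphore(POOL_LIMIT)
active, peak = 0, 0
lock = threading.Lock()

def upload(blob_id):
    global active, peak
    with pool:                    # blocks once POOL_LIMIT uploads are in flight
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)          # stand-in for the network transfer
        with lock:
            active -= 1

threads = [threading.Thread(target=upload, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"peak concurrent uploads: {peak}")  # never exceeds POOL_LIMIT
```

The fix in real .NET Framework code is to raise ServicePointManager.DefaultConnectionLimit (or the per-ServicePoint ConnectionLimit) before issuing the parallel calls, so the pool is no longer the bottleneck.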


Digital transformation: The difference between success and failure

Commenting on the survey, Ritam Gandhi, founder and director of Studio Graphene, said: "They say necessity is the mother of invention, and the pandemic is evidence of that. While COVID-19 has put unprecedented strain on businesses, it has also been key to fast-tracking digital innovation across the private sector. "The research shows that the crisis has prompted businesses to break down the cultural barriers which previously stood in the way of experimenting with new digital solutions. This accelerated digital transformation offers a positive outlook for the future -- armed with technology, businesses will now be much better-placed to adapt to any unforeseen challenges that may come their way." Digital transformation, whatever precise form it takes, is built on the internet and so, even in normal times, internet infrastructure needs to be robust. In abnormal times such as the current pandemic, with widespread remote working and increased reliance on online services generally, a resilient internet is vital. So how did it hold up in the first half of 2020?


From Cloud to Cloudlets: A New Approach to Data Processing?

Though the term “cloudlet” is still relatively new (and relatively obscure), the central concept is not. Even from the earliest days of cloud computing, it was recognized that sending large amounts of data to the cloud to be processed raises bandwidth issues. Over much of the past decade, this issue has been masked by the relatively small amounts of data that devices have shared with the cloud. Now, however, the limitations of the standard cloud model are becoming all too clear. There is a growing consensus that sending the growing volume of end-device data to the cloud for processing is too resource-intensive, time-consuming, and inefficient for large, monolithic clouds to handle. Instead, say some analysts, these data are better processed locally. This processing will either need to take place in the device that is generating the data, or in a semi-local cloud that sits between the device and an organization’s central cloud storage. That interstitial layer is the “cloudlet,” yielding a three-tier hierarchy: intelligent device, cloudlet, and cloud.
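The device/cloudlet/cloud decision can be pictured as a simple routing policy. The thresholds below are entirely illustrative; real placement decisions would weigh measured bandwidth, latency, and privacy constraints for the actual workload:

```python
def route(payload_kb, latency_budget_ms, needs_fleet_wide_view=False):
    """Decide where a reading is processed in a device/cloudlet/cloud tier."""
    if needs_fleet_wide_view:
        return "cloud"        # cross-site aggregation still belongs centrally
    if latency_budget_ms < 20:
        return "device"       # too tight a budget for any network hop
    if payload_kb > 256:
        return "cloudlet"     # heavy payloads stay on the local tier
    return "cloudlet" if payload_kb > 64 else "device"

print(route(512, 100))        # cloudlet
print(route(8, 5))            # device
print(route(8, 100, True))    # cloud
```

The policy captures the article's claim in miniature: only work that genuinely needs the central cloud (fleet-wide views) travels there, while bandwidth-heavy and latency-sensitive processing stays on the lower tiers.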


Align Your Data Architecture with the Strategic Plan

Data collected today impacts business direction and growth for tomorrow. The benefits of having and using data that align with strategic goals include the ability to make evidence-based decisions, which can provide insights on how to reduce costs and increase the efficiency of resource utilization. Data are only valuable when they correlate to a company’s working goals; that means available data should assist in making the most important decisions at the present time. Data-based decision-making also coincides with lower overall costs. Examples of data that should be considered in any data set include digital data, such as web traffic, customer relationship management (CRM) data, email marketing data, customer service data, and third-party data. ... For some data sets, there may not be a need for big data processing (and therefore no reason to incur the associated costs). Collecting all data that exists, just because it is available, does not guarantee inherent value to the company. Furthermore, data from multiple sources may not be structured and may require heavy lifting on the processing side. Additionally, clearly defined data points, such as demographics, financial background and market trends, will add varying value to any organization and predict the volume of data and processing needed for meaningful optimization.


Information Quality Characteristics

A personal experience involved the development of an initial data warehouse for global financial information. The initial effort was to build a new source of global information that would be more available and would allow senior management to monitor the current month’s progress toward budget goals for gross revenue and other profit and loss (P&L) items. The effort was to build the information from the source systems that feed the process used to develop the P&L statements. To deliver information that would be believable to the senior executives, a stated goal was to match the published P&L information. After a great deal of effort, the initial goal was scaled back to delivering the capability for gross revenue alone, because there was no consistent source data for the other P&L items. Even the new goal proved elusive, as the definition of gross revenue varied among the more than 75 corporate subsidiaries. Initial attempts to aggregate sales for a subsidiary so that they matched reported amounts proved extremely challenging. The team had to develop a different process to aggregate sales for each subsidiary. Unfortunately, that process was not always successful in matching the published revenue amounts.
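The "different process per subsidiary" outcome the team landed on is essentially a strategy registry: one aggregation routine per source, selected by subsidiary. A Python sketch with invented names and rules:

```python
# Each subsidiary defined "gross revenue" differently, so the warehouse
# ends up with one aggregation routine per source. Names and rules here
# are illustrative, not the actual definitions from the project.
def gross_standard(rows):
    return sum(r["amount"] for r in rows if r["type"] == "sale")

def gross_net_of_returns(rows):
    # returns carry negative amounts, so including them nets them out
    return sum(r["amount"] for r in rows if r["type"] in ("sale", "return"))

AGGREGATORS = {
    "subsidiary_a": gross_standard,
    "subsidiary_b": gross_net_of_returns,
}

def gross_revenue(subsidiary, rows):
    return AGGREGATORS[subsidiary](rows)

rows = [{"type": "sale", "amount": 100.0},
        {"type": "return", "amount": -20.0}]
print(gross_revenue("subsidiary_a", rows))  # 100.0
print(gross_revenue("subsidiary_b", rows))  # 80.0
```

The registry makes the inconsistency explicit and auditable, but it also illustrates the cost the author describes: every new subsidiary means another bespoke routine to write, validate, and reconcile against the published figures.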



Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik

Daily Tech Digest - October 01, 2020

Levelling the playing field: 3 tips for women on breaking into tech

Do you worry over work decisions? Do you negatively compare your work to others? Chances are you’ve experienced imposter syndrome. And you’re far from alone — 90% of women in the UK experience it too. As Kim Diep from Trainline mentioned at Code Fest: “No matter what level you are in, in your tech career, I think everyone has some moments of self-doubt where they feel like they’re not good enough.” When you feel insecure, it’s easy to bottle those feelings up and keep your head down. To combat this, step out of your comfort zone and face these insecurities head-on. Remember, you were hired because of your skills, talent and experience, not by luck! You don’t have to dive straight into delivering your next company all-hands. However, trying something as simple as active participation in meetings can help boost confidence. ... Whether you’re looking to transition into a tech-based career or have worked in the industry for years, mentors are an invaluable source of wisdom, experience and relationships. Look to your managers for advice — that’s what they are there for. Join webinars or virtual events, ask questions and don’t be afraid to drop someone you admire a friendly LinkedIn note to see if they’d be up for sharing any tips.


Why Every DevOps Team Needs A FinOps Lead

FinOps is the operating model for the cloud. FinOps enables a shift — a combination of systems, best practices, and culture — to increase an organization’s ability to understand cloud costs and make tradeoffs. In the same way that DevOps revolutionized development by breaking down silos and increasing agility, FinOps increases the business value of cloud by bringing together technology, business, and finance professionals with a new set of processes. Simply put, FinOps applies the same principles of DevOps to financial and operational management of cloud assets and infrastructure. Ideally, this means managing those assets through code rather than human interventions. To do this effectively, a FinOps practitioner must understand the patterns of both customer usage and product requirements, and map those correctly to maximize value while continuing to optimize for customer experience. ... When we started our FinOps project, all we had to work with were flat data files that lacked key information. With these flat files, we had no easy means of attributing dollar values to specific projects or research deployments. Needless to say, this was a nightmare.
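The flat-file pain described above is essentially an attribution problem: mapping raw billing line items to the projects that incurred them. Below is a minimal sketch of tag-based cost attribution; the field names (`cost`, `tags`, `project`) are hypothetical placeholders, not any particular cloud provider's billing schema.

```python
from collections import defaultdict

def attribute_costs(line_items):
    """Roll up raw billing line items into per-project totals.

    Each line item is a dict with a cost and a tags dict. Items whose
    resources were never tagged fall into an 'untagged' bucket, which is
    exactly the attribution gap that flat billing files leave you with.
    """
    totals = defaultdict(float)
    for item in line_items:
        project = item.get("tags", {}).get("project", "untagged")
        totals[project] += item["cost"]
    return dict(totals)

billing = [
    {"cost": 12.50, "tags": {"project": "research-a"}},
    {"cost": 3.10, "tags": {"project": "research-a"}},
    {"cost": 7.00, "tags": {}},  # untagged resource: no project attribution
]
print(attribute_costs(billing))
```

Managing attribution "through code rather than human interventions" starts with enforcing the tagging discipline this sketch depends on: an untagged bucket that keeps growing is the first signal a FinOps lead would act on.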


Three Reasons AI-Powered Platforms Fail

First and foremost, businesses must have a clear idea of exactly what they want to replace with machines. If you shoot for the moon before understanding gravity, you're not going to get very far. When it comes to building AI-powered platforms, you have to build up to solving the big-picture problem by first automating lots of small functions and tasks. Often, businesses automate the wrong things and end up creating technology that is unable to deliver on its promise. Start by studying the industry to understand the most mundane, time-consuming, human-intensive or manual processes of a task or function; focus on areas like repetitive tasks, data entry, common requests, etc. This is where your automation work should begin. It is paramount that the foundational elements of an AI-powered platform are consistently operating with 100% accuracy before moving on to building the next layer of automation. ... It's a given you need to hire strong data scientists and technologists experienced in AI, machine learning and natural language processing, and many businesses are following this protocol: Job postings for AI-related roles grew 14% year over year prior to the Covid-19 outbreak in early March 2020.


Rethinking risk and compliance for the Age of AI

At its core, risk management refers to a company’s ability to identify, monitor and mitigate potential risks, while compliance processes are meant to ensure that it operates within legal, internal and ethical boundaries. These are information-intensive activities – they require collecting, recording and especially processing a significant amount of data – and as such are particularly suited to deep learning, the dominant paradigm in AI. Indeed, this statistical technique for classifying patterns – using neural networks with multiple layers – can be effectively leveraged to improve analytical capabilities in risk management and compliance. ... early experience shows that AI can create new types of risks for businesses. In hiring and credit, AI may amplify historical bias against female and minority-background applicants, while in healthcare it may lead to opaque decisions because of its black-box problem, to name just a few. These risks are amplified by the inherent complexity of deep learning models, which may contain hundreds of millions of parameters. This encourages companies to procure solutions from third-party vendors whose inner workings they know little about.


An introduction to web application firewalls for Linux sysadmins

Much like "normal" firewalls, a WAF is expected to block certain types of traffic. To do this, you have to provide the WAF with a list of what to block. As a result, early WAF products were very similar to other products such as anti-virus software and IDS/IPS products. This is what is known as signature-based detection. Signatures typically identify a specific characteristic of an HTTP packet that you want to allow or deny. ... Signatures work pretty well but require a lot of maintenance to keep false positives to a minimum. Additionally, writing signatures is more of an art form than a straightforward programming task: you're often trying to match a general attack pattern without also matching legitimate traffic. To be blunt, this can be pretty nerve-racking. ... In the brave new world of dynamic rulesets, WAFs use more intelligent approaches to identifying good and bad traffic. One of the "easier" methods employed is to put the WAF in "learning" mode so it can monitor the traffic flowing to and from the protected web server. The objective here is to "train" the WAF to identify what good traffic looks like.
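Signature-based detection can be illustrated with a toy rule engine. This is a sketch only: the three signatures below are deliberately simplistic, and real rulesets (such as the OWASP Core Rule Set) are far larger and carefully tuned to avoid matching legitimate traffic.

```python
import re

# Illustrative signatures only -- each pairs a name with a regex that
# matches one characteristic of a request line we want to deny.
SIGNATURES = [
    ("sql-injection", re.compile(r"(?i)union\s+select")),
    ("path-traversal", re.compile(r"\.\./")),
    ("xss", re.compile(r"(?i)<script\b")),
]

def inspect(request_line):
    """Return the name of the first matching signature, or None to allow."""
    for name, pattern in SIGNATURES:
        if pattern.search(request_line):
            return name
    return None

print(inspect("GET /index.html HTTP/1.1"))                    # None: allow
print(inspect("GET /item?id=1 UNION SELECT pw HTTP/1.1"))     # sql-injection
```

The maintenance burden the article describes lives in that `SIGNATURES` list: every new attack variant means another pattern, and every pattern is another chance for a false positive.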


Cryptojacking: The Unseen Threat

The reasons cryptojacking is so prolific are threefold: it doesn't require elevated permissions, it is platform agnostic, and it rarely sets off antivirus triggers. In addition, the code is often small enough to insert surreptitiously into open source libraries and dependencies that other platforms rely on. It can also be configured to throttle itself based on the device, and to use a flavor of encrypted DNS, so as not to arouse suspicion. Cryptojacking code can be built for almost any context and in various languages such as JavaScript, Go, Ruby, Shell, Python, PowerShell, etc. As long as the malware can run local commands, it can utilize CPU processing power and start mining cryptocurrency. In addition to entire systems, cryptominers can thrive in small workhorse environments, such as Docker containers, Kubernetes clusters, and mobile devices, or leverage misconfigured cloud instances and overpermissioned accounts. The possibilities are endless. ... In addition to the huge number of targets, corporate data breaches are heavily underreported because laws vary by jurisdiction on when a company is required to report a breach.


Speeding up HTTPS and HTTP/3 negotiation with... DNS

The fundamental problem comes from the fact that negotiation of HTTP-related parameters (such as whether HTTPS or HTTP/3 can be used) is done through HTTP itself (either via a redirect, HSTS and/or Alt-Svc headers). This leads to a chicken-and-egg problem: the client needs to use the most basic HTTP configuration that has the best chance of succeeding for the initial request, which in most cases means plaintext HTTP/1.1. Only after it learns these parameters can it change its configuration for the following requests. But before the browser can even attempt to connect to the website, it first needs to resolve the website’s domain to an IP address via DNS. This presents an opportunity: what if the additional information required to establish a connection could be provided, alongside IP addresses, with DNS? That’s what we’re excited to be announcing today: Cloudflare has rolled out initial support for HTTPS records to our edge network. Cloudflare’s DNS servers will now automatically generate HTTPS records on the fly to advertise whether a particular zone supports HTTP/3 and/or HTTP/2, based on whether those features are enabled on the zone. The new proposal, currently discussed by the Internet Engineering Task Force (IETF), defines a family of DNS resource record types (“SVCB”) that can be used to negotiate parameters for a variety of application protocols.
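On the client side, the payoff is that the protocol choice can happen before the first connection is ever made. Here is a sketch of that selection logic, assuming the ALPN list from the HTTPS record has already been resolved; real records also carry other parameters (such as alternative ports and ECH configuration) that this toy ignores.

```python
def choose_protocol(https_record_alpn):
    """Pick the best protocol advertised via the DNS HTTPS record.

    https_record_alpn is the ALPN list from the record (e.g. ["h3", "h2"]).
    An empty list means no record was found, so the client falls back to
    the old behaviour: connect with HTTP/1.1 and upgrade only after it
    learns better via redirects, HSTS or Alt-Svc.
    """
    preference = ["h3", "h2", "http/1.1"]
    for proto in preference:
        if proto in https_record_alpn:
            return proto
    return "http/1.1"  # no record: legacy chicken-and-egg path

print(choose_protocol(["h3", "h2"]))  # h3: skip straight to HTTP/3
print(choose_protocol(["h2"]))        # h2
print(choose_protocol([]))            # http/1.1: fall back and upgrade later
```

The whole point of the record is the first branch: when the resolver already said "h3", the browser never has to spend a round trip discovering it.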


Microsoft Issues Updated Patching Directions for 'Zerologon'

Microsoft issued a four-step plan to protect a user's environment and prevent outages: Update domain controllers with a patch released Aug. 11 or later; Find devices that are making vulnerable connections by monitoring event logs; Address noncompliant devices making vulnerable connections; and Enable enforcement mode to address CVE-2020-1472 in your environment. Microsoft issued the first phase of the patch on Aug. 11 to partially mitigate the vulnerability. It plans to issue a second patch Feb. 9, 2021, which will handle the enforcement phase of the update. "The [domain controllers] will now be in enforcement mode regardless of the enforcement mode registry key," Microsoft says. "This requires all Windows and non-Windows devices to use secure [Remote Procedure Call] with Netlogon secure channel or explicitly allow the account by adding an exception for the non-compliant device." ... "An elevation of privilege vulnerability exists when an attacker establishes a vulnerable Netlogon secure channel connection to a domain controller, using the Netlogon Remote Protocol (MS-NRPC). An attacker who successfully exploited the vulnerability could run a specially crafted application on a device on the network," Microsoft says.
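Step two of the plan, finding devices that make vulnerable connections, amounts to filtering domain controller event logs: Microsoft's Zerologon guidance documents event ID 5829 for a vulnerable Netlogon secure channel connection that was allowed. The sketch below assumes the events have already been exported into simple dicts; the data shape is illustrative, not a Windows API.

```python
# Event ID from Microsoft's CVE-2020-1472 guidance: a vulnerable Netlogon
# secure channel connection that was allowed during the deployment phase.
VULNERABLE_CONNECTION_ALLOWED = 5829

def noncompliant_devices(events):
    """Return the distinct machine accounts that triggered event 5829."""
    return sorted({e["machine"] for e in events
                   if e["event_id"] == VULNERABLE_CONNECTION_ALLOWED})

# Hypothetical export from a domain controller's System log.
events = [
    {"event_id": 5829, "machine": "LEGACY-NAS$"},
    {"event_id": 4624, "machine": "WS-042$"},   # unrelated logon event
    {"event_id": 5829, "machine": "LEGACY-NAS$"},
]
print(noncompliant_devices(events))  # ['LEGACY-NAS$']
```

Every device on that list must be patched or explicitly excepted before enforcement mode (step four) starts refusing its connections outright.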


War of the AI algorithms: the next evolution of cyber attacks

Over the years, hackers have consistently reinforced the old adage: ‘where there’s a will there’s a way’. Defenders have added new rules to their firewalls or developed new detection signatures based on attacks they have seen, and hackers have constantly reoriented their attack methodologies to evade them, leaving organisations playing catch-up and scrambling for a plan B in the face of an attack. A paradigm shift came in 2017 when the destructive ransomware ‘worms’ WannaCry and NotPetya caught the security world unaware, bypassing traditional tools like firewalls to cripple thousands of organisations across 150 countries, including a number of NHS agencies. A critical response to the onset of increasingly sophisticated and novel attacks has been AI-powered defences, a development driven by the philosophy that information about yesterday’s attacks cannot predict tomorrow’s threats. In recent years, thousands of organisations have embraced AI to understand what is ‘normal’ for their digital environment and identify behaviour that is anomalous and potentially threatening. Many have even entrusted machine algorithms to autonomously interrupt fast-moving attacks. This active, defensive use of AI has changed the role of security teams fundamentally, freeing up humans to focus on higher level tasks.


The biggest cyber threats organizations deal with today

“Ransomware criminals are intimately familiar with systems management concepts and the struggles IT departments face. Attack patterns demonstrate that cybercriminals know when there will be change freezes, such as holidays, that will impact an organization’s ability to make changes (such as patching) to harden their networks,” Microsoft explained. “They’re aware of when there are business needs that will make businesses more willing to pay ransoms than take downtime, such as during billing cycles in the health, finance, and legal industries. Targeting networks where critical work was needed during the COVID-19 pandemic, and also specifically attacking remote access devices during a time when unprecedented numbers of people were working remotely, are examples of this level of knowledge.” Some of them have even shortened their in-network dwell time before deploying the ransomware, going from initial entry to ransoming the entire network in less than 45 minutes. Gerrit Lansing, Field CTO, Stealthbits, commented that the speed at which a targeted ransomware attack can happen is really determined by one thing: how quickly an adversary can compromise administrative privileges in Microsoft Active Directory.



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford