Daily Tech Digest - September 12, 2020

Women in Fintech: How Open Banking Can Help Address Data Bias

A disturbing recent example is the story of Jamie Heinemeier Hansson, who was offered a credit limit on her Apple Card 20 times lower than her husband David's. This was despite her having a better credit score, the couple filing a joint tax return, and their having an equal share in their property. The Apple Card incident highlighted that computers are not impartial. Artificial intelligence may well be able to digest vast amounts of information and identify patterns far beyond the capability of humans, but the historical data from which such systems “learn” in order to draw conclusions can be biased, even if unintentionally so. A system can therefore make a discriminatory decision about a woman’s credit rating due to inherent bias in its training – for example, because women were historically less likely to be granted credit, the algorithm continues that pattern – without ever having asked her gender. However, many believe that while technology can perpetuate these biases, it could also be used to address them, particularly in the open banking era. “I genuinely believe technology can level the playing field fundamentally,” says Sam Seaton, CEO of Moneyhub.
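A toy sketch can make the proxy-bias mechanism above concrete. The data and field names here are entirely invented for illustration: gender is never recorded, but a correlated proxy attribute carries the historical disparity into the model's decisions.

```python
# Illustrative sketch (hypothetical data): a model trained on biased
# historical approvals can discriminate without ever seeing gender.
from collections import defaultdict

# Each record: (occupation_code, approved). Gender is never a feature,
# but occupation "B" is held mostly by women in this toy history, and
# past lenders approved it less often.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Training" here is just the historical approval rate per code.
counts = defaultdict(lambda: [0, 0])  # code -> [approved, total]
for code, approved in history:
    counts[code][0] += int(approved)
    counts[code][1] += 1

def predict_approval(code, threshold=0.5):
    approved, total = counts[code]
    return approved / total >= threshold

print(predict_approval("A"))  # True  - the historical pattern
print(predict_approval("B"))  # False - bias reproduced via the proxy
```

The point of the sketch is that removing the sensitive attribute is not enough: any feature correlated with it can smuggle the old pattern back in.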


Simplify agile, devops, and ITSM with Jira automations

Jira automations work like other IFTTT algorithms, except they have access to all the underlying data and workflows within Jira Software. A Jira automation trigger can be one of several types, including Jira issue types, sprints, and versions. You can design automations for when team members add or modify Jira issues, when scrum masters start or complete sprints, or when team leads create, update, or release versions. These triggers are highly useful for scrum masters, product owners, and technical leads who want to simplify the work needed to keep Jira updated with high-quality data. Jira automation also supports triggers tied to devops events such as pull requests, builds, branches, commits, and deployments. These events connect with Bitbucket, GitLab, and GitHub and update Jira issue or version status based on developer activities performed in version control. More advanced triggers can run on a defined schedule or respond to webhooks. Teams using these two triggers can get very creative with integrating Jira workflows with other tools or automating administrative tasks on a schedule. Once you configure the trigger, you have the option to add more filtering conditions or to branch the flow and support different sets of actions.
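The trigger → condition → branch → action flow described above can be sketched in miniature. This is not the Jira REST schema or Atlassian's rule format; a plain dict stands in for an issue and the field names are illustrative only.

```python
# Minimal sketch of a trigger -> condition -> branch -> action rule,
# modelled on a plain dict standing in for a Jira issue.
# (Field names are illustrative, not the actual Jira REST schema.)

def on_issue_updated(issue, actions_log):
    # Condition: only act on high-priority bugs.
    if issue["type"] != "Bug" or issue["priority"] != "High":
        return
    # Branch 1: unassigned issues get routed to triage.
    if issue.get("assignee") is None:
        actions_log.append(f'{issue["key"]}: assigned to triage')
    # Branch 2: assigned issues just get an escalation comment.
    else:
        actions_log.append(f'{issue["key"]}: commented "escalated"')

log = []
on_issue_updated({"key": "DEV-1", "type": "Bug", "priority": "High",
                  "assignee": None}, log)
on_issue_updated({"key": "DEV-2", "type": "Task", "priority": "High",
                  "assignee": "sam"}, log)
print(log)  # ['DEV-1: assigned to triage']
```

The second call falls out at the condition step, which is exactly the filtering behavior the excerpt describes: conditions narrow which triggered events actually reach an action.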


How trusted data is driving resilience and transformation beyond Covid-19

Over the next three to five years, most business workflows will be disrupted by the application of data and artificial intelligence (AI). Efficiency will be prioritised because it underpins business survival. If we take power and utilities as an example, we can expect disruption of the billing workflow, call centres, customer onboarding, customer service, and distribution. Document intelligence will also be used to glean insights from large volumes of information. Ultimately, data and AI will reinvent the entire end-to-end value chains of industries. Companies that recognise the strategic value of data will be the leaders in digital transformation, giving them a competitive position in the market. ... The pandemic has highlighted the value of data since having and sharing information on individuals will be key to defeating the virus. So, in the evolving normal, we can expect more data-sharing platforms – platforms that allow the public sector to share information with the private sector and platforms that allow different companies within the private sector to share information with each other. Boundaries between sectors will blur over time and regulation will adapt to accommodate data sharing.


Bluetooth Bug Opens Devices to Man-in-the-Middle Attacks

The Bluetooth SIG is recommending that potentially vulnerable Bluetooth implementations introduce the restrictions on CTKD that have been mandated in Bluetooth Core Specification versions 5.1 and later. These restrictions prevent the overwrite of an authenticated key or a key of a given length with an unauthenticated key or a key of reduced length. “The Bluetooth SIG is also broadly communicating details on this vulnerability and its remedies to our member companies and is encouraging them to rapidly integrate any necessary patches,” according to Bluetooth. “As always, Bluetooth users should ensure they have installed the latest recommended updates from device and operating system manufacturers.” Several Bluetooth-based attacks have cropped up over the past year. In May, academic researchers uncovered security vulnerabilities in Bluetooth Classic that could have allowed attackers to spoof paired devices and capture sensitive data. In February, meanwhile, a critical vulnerability in the Bluetooth implementation on Android devices was discovered that could allow attackers to launch remote code-execution (RCE) attacks – without any user interaction.
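The overwrite restriction described above has a simple shape: never replace an authenticated key with an unauthenticated one, and never replace a key with a shorter one. The sketch below is a hedged illustration of that rule only; the `PairingKey` data model is invented and does not reflect any real Bluetooth stack's internals.

```python
# Illustrative check for the CTKD overwrite rule described above:
# an authenticated key (or a key of a given length) must not be
# replaced by an unauthenticated or shorter key.
from dataclasses import dataclass

@dataclass
class PairingKey:
    authenticated: bool
    length_bytes: int

def overwrite_allowed(current: PairingKey, candidate: PairingKey) -> bool:
    if current.authenticated and not candidate.authenticated:
        return False  # never downgrade authenticated -> unauthenticated
    if candidate.length_bytes < current.length_bytes:
        return False  # never accept a key of reduced length
    return True

strong = PairingKey(authenticated=True, length_bytes=16)
weak = PairingKey(authenticated=False, length_bytes=7)
print(overwrite_allowed(strong, weak))  # False - blocks the downgrade
print(overwrite_allowed(weak, strong))  # True  - upgrading is fine
```

The attack class the Bluetooth SIG is addressing works precisely by sneaking a weaker key in through the cross-transport derivation path, which this kind of gate closes.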


Australia’s very small step to make the Internet of Things safer

Security flaws in IoT devices are common. Hackers can exploit those vulnerabilities to take control of devices, steal or change data, and spy on us. In recognition of these risks, the Australian government has introduced a new code of practice to encourage manufacturers to make IoT devices more secure. The code provides guidance on secure passwords, the need for security patches, the protection and deletion of consumers’ personal data and the reporting of vulnerabilities, among other things. The problem is the code is voluntary. Experiences elsewhere, such as the United Kingdom, suggest a voluntary code will be insufficient to deliver the protections consumers need. ... A better option would have been a “co-regulatory” approach. Co-regulation mixes aspects of industry self-regulation with both government regulation and strong community input. It includes laws that create incentives for compliance (and disincentives against non-compliance) and regulatory oversight by an independent (and well-resourced) watchdog. The Australian government has, at least, described its new code of practice as “a first step” to improving the security of IoT devices.


Four ways network traffic analysis benefits security teams

The SecOps team will often need the network data and behavior insights for security analytics or compliance audits. This will usually require network metadata and packet data from physical, virtual and cloud-native elements of the network deployed across the data center, branch offices and multi-cloud environments. The easier it is to access, index and make sense out of this data (preferably in a “single pane of glass” solution), the more value it will provide. Obtaining this insight is entirely feasible but will require a mix of physical and virtual network probes and packet brokers to gather and consolidate data from the various corners of the network to process and deliver it to the security tool stack. NDR solutions can also offer the SecOps team the ability to capture and retain network data associated with indicators of compromise (IOCs) for fast forensics search and analysis in case of an incident. This ability to capture, save, sort and correlate metadata and packets allows SecOps to investigate breaches and incidents after the fact and determine what went wrong, and how the attack can be better recognized and prevented in the future.
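The capture-and-search workflow the excerpt describes boils down to indexing metadata at capture time so IOC lookups are fast later. Here is a toy sketch; the flow records and the known-bad IP are made-up sample data, and a real NDR product indexes far richer metadata.

```python
# Illustrative sketch: index flow metadata as it is captured, then
# query it by an indicator of compromise (a known-bad IP).
from collections import defaultdict

index = defaultdict(list)  # dest_ip -> captured flow records

def capture(flow):
    index[flow["dest_ip"]].append(flow)

def search_by_ioc(ioc_ip):
    # Indexed lookup: no need to replay raw packet captures.
    return index.get(ioc_ip, [])

capture({"src_ip": "10.0.0.5", "dest_ip": "203.0.113.9", "bytes": 8421})
capture({"src_ip": "10.0.0.7", "dest_ip": "198.51.100.2", "bytes": 120})

hits = search_by_ioc("203.0.113.9")
print(len(hits), hits[0]["src_ip"])  # 1 10.0.0.5
```

This is the after-the-fact forensics value: once a new IOC is published, already-captured metadata can be searched immediately to see which internal hosts touched it.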


A Beginner’s Introduction To DevOps Principles

To put it simply, DevOps is all about integrating these two teams together (hence the portmanteau of a name). It isn’t going to make your developers into sysadmins, or vice versa, but it should help them work together. Each aspect and phase is complemented with tools that make this whole process easier. DevOps is more than just tools and automation, and implementing a set of “DevOps tools” won’t automatically make your team work twice as fast, but these tools are a major part of the process, and it’d be hard to be as efficient without some of them. ... Rather than testing and building only once when everything is finished, in a DevOps environment, each developer will ideally submit changes to source control multiple times a day, whenever issues are complete or a minor milestone is reached. This allows the build and testing phases to start early, and ensures no developer gets too far away from the HEAD of the master branch in source control. This stage is mostly about proper source control management, so having an effective git service like GitHub, GitLab, or Bitbucket is crucial to keeping continuous integration running smoothly. You don’t have to deploy every commit to production right away, but quick automated deployments are a major part of being able to push rapid releases.
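The commit-triggered build-and-test gating described above can be sketched as a toy pipeline: stages run in order on every push, and the first failure stops the run before anything is deployed. Stage names and the failing stage are invented for illustration.

```python
# Toy sketch of a CI gate: run stages in order, stop at first failure.
def run_pipeline(stages):
    """stages is a list of (name, callable) pairs; each callable
    returns True on success. Returns (completed stages, failed name)."""
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name  # pipeline stops here
        completed.append(name)
    return completed, None

ok, failed = run_pipeline([
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("integration-tests", lambda: False),  # simulate a failing stage
    ("deploy", lambda: True),
])
print(ok, failed)  # ['build', 'unit-tests'] integration-tests
```

Because every small commit passes through this gate, a broken change is caught within hours of being written rather than weeks later during a big-bang integration.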


It's the biggest job in tech. So why can't they find anyone to do it?

The failure to appoint a senior leader to coordinate the mammoth task of digitizing public services is at odds with the government's rhetoric. Three years ago, the UK reiterated the need to create a "government as a platform" in a brand-new digital strategy, with the objective of harnessing the potential of digital to improve the efficiency of public services. The goal? To enable "digital by default" across government, and use technology and data to better serve citizens with digitally enabled public services that would be easier, simpler and cheaper. Since then, many reports have emerged stressing the difficulty of achieving this digital transformation journey without proper management from the very top. Last year, for instance, a report from the House of Commons' Science and Technology Committee found that the government's digital momentum was slowing, and that the shift was partly due to a lack of senior leadership. These failures have been especially palpable in the past few months. As the global COVID-19 pandemic turned the world upside down, the need for a government that effectively delivers digital services in a time of crisis became ever-more important.


Visa Warns of Fresh Skimmer Targeting E-Commerce Sites

The Visa alert does not indicate how Baka is initially delivered to a network. But the report notes that the malicious code is hosted on several suspicious domains, including: jquery-cycle[.]com, b-metric[.]com, apienclave[.]com, quicdn[.]com, apisquere[.]com, ordercheck[.]online and pridecdn[.]com. Once the initial infection takes hold, the skimmer is uploaded through the command-and-control server, but the code loads in memory. This means the malware is never present on the targeted e-commerce firm's server or saved to another device, helping it to avoid detection, according to the alert. "The skimming payload decrypts to JavaScript written to resemble code that would be used to render pages dynamically," according to Visa. Once embedded in an e-commerce site's checkout page, the skimmer begins to collect payment and other customer data from various fields and sends the information to the fraudsters' command-and-control server, Visa notes. Once data exfiltration is complete, Baka performs a "clean-up" function that removes the skimming code from the checkout page, according to the alert. This also helps ensure that JavaScript is not spotted by anti-malware tools.


Elon Musk is one step closer to connecting a computer to your brain

While the development of this futuristic-sounding tech is still in its early stages, the presentation was expected to demonstrate the second version of a small, robotic device that inserts tiny electrode threads through the skull and into the brain. Musk said ahead of the event he would “show neurons firing in real-time. The matrix in the matrix.” And he did just that. At the event, Musk showed off several pigs that had prototypes of the Neuralink device implanted in their heads, and machinery that was tracking those pigs’ brain activity in real time. The billionaire also announced the Food and Drug Administration had awarded the company a breakthrough device authorization, which can help expedite research on a medical device. Like building underground car tunnels and sending private rockets to Mars, this Musk-backed endeavor is incredibly ambitious, but Neuralink builds on years of research into brain-machine interfaces. A brain-machine interface is technology that allows for a device, like a computer, to interact and communicate with a brain. 




Quote for the day:

"The actions of a responsible executive are contagious." -- Joe D. Batten

Daily Tech Digest - September 11, 2020

How this open source test framework evolves with .NET

Fixie v3 is a work in progress that we intend to release shortly after .NET 5 arrives. .NET 5 is the resolution to the .NET Framework vs. .NET Core development lines, arriving at One .NET. Instead of fighting it, we're following Microsoft's evolution: Fixie v3 will no longer run on the .NET Framework. Removing .NET Framework support allowed us to remove a lot of old, slow implementation details and dramatically simplified the regression testing scenarios we had to consider for each release. It also allowed us to reconsider our design. The Big Three requirements changed only slightly: .NET Core does away with the notion of an App.config file closely tied to your executable, instead relying on a more convention-based configuration. All of Fixie's assembly-loading requirements remained. More importantly, the circumstances around the design changed in a fundamental way: we were no longer limited to using types available in both .NET Framework and .NET Core. By promising less with the removal of .NET Framework support, we gained new degrees of freedom to modernize the system.


A 5-step Guide to Building Empathy that can Boost your Development Career

When you reflect on yourself, also analyze your interactions. When you speak, do you ramble on? Do you raise your voice easily, or get easily upset? Do you talk more than listen? How do you come across physically? Do you roll your eyes, or dart them around the room? Do you slouch or bury your hands in your pockets? Think about the language you use during conversations. Do you use habitual phrases that help or hinder your message? Is your language helping others to pay attention or tune you out? Does it encourage conversations and build bridges? Are you making others feel heard and respected, or ignored and underappreciated? To start your self-awareness journey, you can take advantage of a number of tools: DISC, Real Colors, and Myers-Briggs are all great starting points to understanding your own personality. These tools are not there to dictate who you are, but to guide you in understanding who you are. When you take the quiz, you are essentially having a conversation with that quiz. The results are simply telling you how you showed up to that conversation - the outcome is affected by your mood, attitude, energy, recent events, etc.


New CDRThief malware targets VoIP softswitches to steal call detail records

"At the time of writing we do not know how the malware is deployed onto compromised devices," Anton Cherepanov, one of ESET's top malware hunters, wrote in an analysis today. "We speculate that attackers might obtain access to the device using a brute-force attack or by exploiting a vulnerability. Such vulnerabilities in VOS2009/VOS3000 have been reported publicly in the past," Cherepanov added. However, once the malware has a foothold on a Linux server running Linknat VOS2009 or VOS3000, the malware searches for the Linknat configuration files and extracts credentials for the built-in MySQL database, where the softswitch stores call detail records (CDR, aka VoIP calls metadata). "Interestingly, the password from the configuration file is stored encrypted," Cherepanov pointed out. "However, Linux/CDRThief malware is still able to read and decrypt it. Thus, the attackers demonstrate deep knowledge of the targeted platform, since the algorithm and encryption keys used are not documented as far as we can tell. It means that the attackers had to reverse engineer platform binaries or otherwise obtain information about the AES encryption algorithm and key used in the Linknat code."


Open-sourcing TensorFlow with DirectML

TensorFlow is a widely used machine learning framework for developing, training, and distributing machine learning models. Machine learning workloads often involve tremendous amounts of computation, especially when training models. Dedicated hardware such as the GPU is often used to accelerate these workloads. TensorFlow can leverage both Central Processing Units (CPUs) and GPUs, but its GPU acceleration is limited to vendor-specific platforms that vary in support for Windows and across its users’ diverse range of hardware. Bringing the full machine learning training capability to Windows, on any GPU, has been a popular request from the Windows developer community. The DirectX platform in Windows has been accelerating games and compute applications on Windows for decades. DirectML extends this platform by providing high-performance implementations of mathematical operations—the building blocks of machine learning—that run on any DirectX 12-capable GPU. We’re bringing high-performance training and inferencing on the breadth of Windows hardware by leveraging DirectML in the TensorFlow framework. 


Developing a plan for remote work security? Here are 6 key considerations

Training needs to address all aspects of your structure, specifically: information security, data security, cybersecurity, computer security, physical security, IoT security, cloud security, and individual security. Each area of an architecture needs to be tested and hardened regularly for your organization to truly be shielded from security breaches. Be specific about your program: train your staff on how to defend your information around your HR records (SSNs, PII, etc.) and data that could be exposed (shopping cart, customer card numbers), as well as in cyber defense to provide tools against nefarious actors, breaches and threats. Staff must be trained to know how to lock down computers, so individual machines and network servers are safe. This training should also encompass how to ensure physical security, to protect your storage or physical assets. This comes into play more as the IoT plays a larger role in connecting our devices and BYOD policies allow for more connections to be made between personal and corporate assets. Individual security: each employee is entitled to be secure in their work for a company, and that includes privacy concerns and compliance issues.


Phishing attack baits victims by promising access to quarantined emails

As analyzed by the Cofense Phishing Defense Center, this phishing attack is directed toward employees within an organization. Impersonating the technical support team of the user's employer, the campaign pretends to have quarantined three email messages, blocking them from reaching the recipient's inbox. Clicking on a link promises access to these messages but instead directs the person to a phishing page. The user is then prompted to sign in with their email account credentials, which are then captured by the attacker. The campaign seems convincing in a variety of ways, according to Cofense. By spoofing the account of the internal support staff, the phishing email appears to come from a trusted source. The quarantine notice sounds real, even claiming that the quarantined messages failed to process and must be reviewed to confirm their validity. Further, the notice has an air of immediacy by saying that two of the messages are considered valid and will be deleted in three days unless action is taken. Such a notice could convince the recipient that these are messages of importance to their organization, requiring a quick response to review them before they're gone.


Laying The Groundwork For ‘Fintech 2.0’ With Digital Assets

Increasingly, government entities are interested in stablecoin technology as well. While it's a promising development in the world of digital assets, Woodford said he doesn't expect state-backed initiatives to go live and take off anytime soon. Rather, the biggest value in these efforts is in validating digital assets as a whole. "If you look at what has caused the shift in mentality in the last 12-18 months, it went from, 'No, we don't want this,' to, 'No, but this is interesting' to the point now where it's interesting and people are actively engaging in this space," he explained. "One of the reasons for that is because of the sentiment, caused by those government announcements. It's one driver, but it's more important and meaningful now in terms of how it's adjusted the attitude." The fact is, any dramatic change in the world's payments landscape isn't going to happen overnight — certainly not a shift from fiat currency toward digital assets like bitcoin. It's part of the reason why stablecoin technology is so popular; it's a blend between fiat and digital currency, and that mix is critical to driving traction. As such, Zero Hash, which recently announced the closure of its Series C funding round, is planning to not only augment its lending offering, but to integrate ACH processing capabilities within its infrastructure.


Smart contact lens prototype raises eyebrows

The human iris controls pupil size in response to light, a critical function that allows the retina to take in appropriate sensory information. Too much light and the world is washed out, too little and it's veiled in darkness. A host of eye diseases and deficiencies inhibit the iris from responding appropriately, including aniridia and keratoconus. Light sensitivity, similarly, is a painful debilitation and is often associated with chronic migraine. Researchers at Imec, an innovation hub based in Belgium, along with partners like CMST, a Ghent University-affiliated research group, the Instituto de Investigación Sanitaria Fundación Jiménez Díaz in Madrid, Spain, and Holst Centre have been developing a low-power wearable solution. The contact lens's iris aperture is tunable thanks to an integrated liquid crystal display (LCD) that manipulates concentric rings. "By combining our expertise on miniaturized flexible electronics, low-power ASIC design and hybrid integration, we have demonstrated the capacity to develop a solution for people who suffer from iris deficiencies, higher order aberrations and photophobia, a common yet debilitating symptom seen in many neuro-ophthalmic disorders," says researcher Prof. Andrés Vásquez.


3 tips for supercharging your remote workforce with AI and automation

Organisations today are facing numerous pressures to enable a remote workforce, particularly in the IT function, since we have entered the post-Covid era. At a time when the traditional modus operandi is constantly being tested, there are some ‘new’ approaches that have actually been in use in other parts of the market for a while now. We can take several lessons from the consumer tech world and how it leverages automation and AI to reduce maintenance and ease automation. Let’s take the Nest thermostat as an example. A single thermostat changes temperature about 1,500 times per year, so a large house with 3 thermostats changes temperature about 4,500 times per year. ... Make sure you have a single API endpoint in the cloud to enumerate and automate all of your storage assets on-prem. Having a cloud-managed platform provides the visibility and orchestration of your assets across sites, servers and applications, and you can take advantage of a single API in the cloud to then automate all or a portion of those as needed. You get an aggregated view, or you can filter by data centre or application, server group, etc. Then ask interesting questions like, where is there available capacity for a new project?
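The closing question above, "where is there available capacity for a new project?", is just a filter over an aggregated asset inventory. The sketch below illustrates that query shape; the asset records, field names, and the in-memory list standing in for the cloud API are all invented for illustration.

```python
# Illustrative sketch: one aggregated inventory (standing in for a
# cloud API's view of on-prem storage assets) answers capacity
# questions with a simple filter. All data here is made up.
assets = [
    {"site": "dc-east", "server_group": "web", "free_tb": 12.0},
    {"site": "dc-east", "server_group": "db",  "free_tb": 1.5},
    {"site": "dc-west", "server_group": "web", "free_tb": 30.0},
]

def with_capacity(min_free_tb, site=None):
    """Assets with enough free space, optionally filtered by site."""
    return [a for a in assets
            if a["free_tb"] >= min_free_tb
            and (site is None or a["site"] == site)]

print([a["server_group"] for a in with_capacity(10.0)])          # ['web', 'web']
print([a["site"] for a in with_capacity(10.0, site="dc-west")])  # ['dc-west']
```

With every site behind one endpoint, the same query works across data centres; without it, answering the question means stitching together per-site tooling by hand.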


Plan for change but don’t leave security behind

The best advice is to plan for change – technical, process and culture – but do not, whatever you do, leave security till last. It has to be front and centre of any plans you make. One concrete change that you can make immediately is taking your security people off just “fire-fighting duty”, where they have to react to crises as they come in: businesses can consider how to use them in a more proactive way. People don’t scale, and there’s a global shortage of security experts. So, you need to use the ones that you have as effectively as you can, and, crucially, give them interesting work to do, if you plan to retain them. It’s almost guaranteed that there are ways to extend their security expertise into processes and automation which will benefit your broader teams. At the same time, you can allow those experts to start preparing for new issues that will arise, and investigating new technologies and methodologies which they can then reapply to business processes as they mature. ... One of the main mistakes we see businesses make is attempting to deploy Kubernetes without the appropriate level of in-house expertise. Kubernetes is an ecosystem, rather than a one-off executable, that relies on other services provided by open source projects. 



Quote for the day:

"Leadership flows from the minds of followers more than from the titles of leaders, more from the perception of willing followers than from anointment." -- Lane Secretan

Daily Tech Digest - September 10, 2020

Does An Analytics Head Require A Doctoral Degree?

Obviously, researchers in business are not expected to publish papers or guide students as their academic counterparts do. They are relied on to analyze complex business problems methodically, as a scientist would. They are expected to make suitable approximations, define some simple parts in the complex whole, and attack them using known repeatable, robust principles and techniques. ... Let us say a large IT services company wants to fill leadership roles in its data science consulting practice. This person should have enough technical depth and the ability to identify the business gaps, communicate with the clients and, most importantly, build solutions that provide measurable business value (interestingly, this last skill is never considered a core competency in any traditional PhD in AI or other Masters and Bachelors courses). Or let us say an IT product company decides to smarten its application and wants leadership that can take it to market quickly and profitably. The leaders should have the skill to define the product, design the technicalities, and lead the data science and DevOps teams compassionately and efficiently for rapid design and development. Hence, a leader in data science is not necessarily a technical expert who worked in the company long enough, or a business leader who is a taskmaster!


Ripple20 Malware Highlights Industrial Security Challenges

Since availability is critical to ICS systems, and since the systems themselves can be fragile and quirky, these are generally the responsibility of operational technology (OT) teams. The information technology (IT) team usually manages the corporate network. OT employees are familiar with process technology and the systems they manage, but they do not generally know a great deal about information security, which can lead to insecure deployments. One fairly common situation for manufacturers is a divide, sometimes adversarial, between the IT and OT staff within a company.  OT employees do not want the IT staff to tamper with their systems out of fear of downtime that can cost the company. From what we have seen, these relationships often resemble red team versus blue team attitudes at many organizations. The blue team can resent the efforts of the red team because those efforts create more work for the blue team and can be considered a criticism of their work. OT employees also often don't want to consult with their IT counterparts when making arrangements such as remote access, leading to situations such as RDP on control networks commonly being exposed to the public Internet.


India can soon be the tech garage of the world

The government has a crucial role to play in positioning India as the Tech Garage of the World. It should act as a catalyst, and bring together the synergies of the private sector with the aim of innovating for India and the world. It has the potential to provide an enabling environment and a favourable regulatory ecosystem for the development of technology products and provide the size and scale necessary for their rollout. The product development should ideally be undertaken through private entrepreneurship, with the government acting as a facilitator. The key principles of product design should incorporate transparency, security and ease of access. The products must have open architecture, should be portable to any hosting environment and should be available in official and regional languages. The irrevocable shift brought about by covid-19 presents opportunities to develop new technology platforms. In this process, data integrity, authenticity and privacy should be embedded into the design of a product. A balance needs to be struck between regulation and product design through a dynamic collaboration between the government and technology entrepreneurs.


The State of Chatbots: Pandemic Edition

Generally speaking, there are two types of chatbots right now. The first kind is the more primitive kind that is based on simple question and answer rules. This kind is the easiest to deploy quickly, in response to some catastrophic event, like, for instance, a pandemic. It has a scripted set of answers. The problem with this kind of chatbot is that it is very limited, and it can't be enhanced or expanded. It's a one-trick chatbot. "The deterministic-rules based approach chatbots are easy to stand up quickly," Ian Jacobs, a principal analyst at Forrester Research, told InformationWeek. That means there was a huge number of these deployed during the pandemic. "There was an increase in call volume, and you were doing anything you could to get answers to customers without hiring another thousand call center agents," he said. These bots were doing very simple things, but "We are getting to the point where the value that brands are getting out of those very simple bots has already been achieved." One example of this type of bot was deployed by a credit union in the northwestern United States in April when stimulus checks were on the way, Jacobs said. This organization stood up a simple bot designed to answer basic questions that people were asking about the checks.
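A deterministic, rules-based bot of the first kind described above is essentially a scripted lookup table with a fallback. The sketch below illustrates that and nothing more; the questions and canned answers are invented placeholders, not the credit union's actual script.

```python
# Minimal "one-trick" rules-based chatbot: a scripted Q&A table plus a
# canned fallback. The answers here are invented placeholders.
RULES = {
    "when will my check arrive": "Most payments arrive within 3 weeks.",
    "how much is the check": "Amounts depend on your filed income.",
}

def answer(question):
    key = question.lower().strip(" ?")
    # Exact-match script or fallback: nothing is learned or inferred.
    return RULES.get(key, "Sorry, I can only answer stimulus questions.")

print(answer("When will my check arrive?"))
print(answer("Can you raise my credit limit?"))  # hits the fallback
```

The limitation the excerpt describes is visible in the code: anything outside the scripted table falls straight through to the fallback, and expanding the bot means hand-writing more rules.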


Digital Transformation Success Elusive For Financial Institutions

When financial institution executives were asked about the importance of alternative digital transformation strategies, improving the overall customer experience was considered to be of high or very high importance by 88% of organizations. The importance of improving the customer experience was followed closely by the need to improve the use of data, AI and advanced analytics (76% rated high or very high). Illustrating the perceived broad scope of digital transformation initiatives at most financial institutions, the majority of the other possible digital transformation strategies were each rated almost identically by financial institution executives in the Digital Banking Report research. Innovation agility, improving marketing and sales, improved efficiency, improved risk management and reducing costs were each rated high or very high by roughly six in ten executives. It is a bit concerning that the need to change the existing business model and transforming legacy core systems were considered the least important strategies despite research that indicates these strategies are of significant importance for transformation success.


Organizations must rethink traditional IT strategy to succeed in the new normal

This newfound self-confidence, combined with IT pros’ achievements during this time, will completely transform how IT is viewed by the business in the future. IT may earn a more prominent voice in the C-suite, as 40% of surveyed IT pros believe they will now be involved in more business-level meetings. Likewise, IT’s role will be up-leveled due to the vast upskilling 26% of IT pros underwent during this experience. With 31% admitting there’s a need to rethink internal processes to better accommodate the rapid pace of change required post-COVID, it’s highly likely a focus on IT pros’ upskilling will continue into the future. “As always, with new responsibilities comes the need for new skills. While almost half of survey respondents felt they received the training required to adapt to changing IT requirements, nearly one-third experienced the opposite, and are at risk of being left behind as IT teams continue to grapple with how best to support the new normal,” said Johnson. IT pros said they’ve gained an increased sense of confidence in their expanded roles, responsibilities, and ability to adapt to unexpected change in the future, despite contending with more challenging working conditions over the course of the pandemic.


Why Linux still needs a flagship distribution

Now, imagine a single distribution has been chosen, from the hundreds currently available, to represent Linux to hardware manufacturers, vendors, and software companies. That one Linux distribution would be used by hardware manufacturers and software companies to create computers and software guaranteed to run on Linux. That distribution would have only one desktop environment, one package manager, one init system, and the current stable version of the Linux kernel. Users could also download this Linux distribution and use it at will, but the primary purpose of "Flagship Linux" would be to make things easier for manufacturers and developers. Set aside your affinity for the Linux distribution you use and ponder this for a moment: Would you rather argue over which distribution is the best, or would you rather see Linux enjoy massive growth in the desktop and laptop arenas? We've already seen a number of manufacturers start the rollout of preinstalled Linux laptops. Lenovo, Dell, and HP are all joining in on the fun, but the process hasn't been easy. Those manufacturers are, for the most part, winnowing down the selection of Linux distributions available.


Federated Machine Learning for Loan Risk Prediction

A model is only as strong as the data it’s provided, but what happens when data isn’t readily accessible or contains personally identifying information? In this case, can data owners and data scientists work together to create models on privatized data? Federated learning shows that it is indeed possible to pursue advanced models while still keeping data in the hands of data owners. This new technology is readily applicable to financial services, as banks have extremely sensitive information ranging from transaction history to demographic information for customers. In general, it’s very risky to give data to a third party to perform analytical tasks. However, through federated learning, the data can be kept in the hands of financial institutions and the intellectual property of data scientists can also be preserved. In this article, we will demystify the technology of federated learning and touch upon one of the many use cases in finance: loan risk prediction. Federated Learning, in short, is a method to train machine learning (ML) models securely via decentralization. That is, instead of aggregating all the data necessary to train a model, the model is instead sent to each individual data owner.
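The core idea is easy to see in a toy sketch, assuming a simple least-squares model trained with plain gradient descent. The two "owners" and their data below are invented for illustration, and real deployments layer secure aggregation and other privacy safeguards on top:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=50):
    """One data owner's training pass: gradient descent on private data
    that never leaves the owner's premises."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w -= lr * grad
    return w

def federated_round(weights, owners):
    """Send the current model to every owner, train locally, then average
    the returned parameters (the FedAvg idea) -- no raw data is pooled."""
    updates = [local_update(weights, X, y) for X, y in owners]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two "banks", each holding a private slice of the overall data.
owners = []
for _ in range(2):
    X = rng.normal(size=(100, 2))
    owners.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, owners)  # w approaches true_w
```

Only model parameters cross organizational boundaries in each round; the transaction-level data stays with each bank throughout.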


How to Protect Chatbots from Machine Learning Attacks

Chatbots are particularly vulnerable to machine learning attacks due to their constant user interactions, which are often completely unsupervised. We spoke to Scanta to get an understanding of the most common cyber attacks that chatbots face. Scanta CTO Anil Kaushik tells us that one of the most common attacks they see are data poisoning attacks through adversarial inputs. Data poisoning is a machine learning attack in which hackers contaminate the training data of a machine learning model. They do this by injecting adversarial inputs, which are purposefully altered data samples meant to trick the system into producing false outputs. Systems that are continuously trained on user-inputted data, like customer service chatbots, are especially vulnerable to these kinds of attacks. Most modern chatbots operate autonomously and answer customer inquiries without human intervention. Often, the conversations between chatbot and user are never monitored unless the query is escalated to a human staff member. This lack of supervision makes chatbots a prime target for hackers to exploit. To help companies protect their chatbots and virtual assistants, Scanta is continuously improving their ML security system, VA Shield.
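A toy sketch makes the poisoning mechanism concrete. The nearest-centroid "intent classifier" below is an invented stand-in for a chatbot model that retrains continuously on user-supplied data; the attacker drags one class centroid by submitting adversarial, mislabeled inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

class CentroidClassifier:
    """Toy continuously-trained model: predicts the class whose centroid
    (mean of all ingested samples) is nearest to the input."""
    def __init__(self):
        self.data = {0: [], 1: []}

    def ingest(self, x, label):
        # No supervision: every user-supplied sample is trusted.
        self.data[label].append(np.asarray(x, dtype=float))

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        dists = {c: np.linalg.norm(x - np.mean(pts, axis=0))
                 for c, pts in self.data.items() if pts}
        return min(dists, key=dists.get)

clf = CentroidClassifier()
for _ in range(20):                       # clean training data
    clf.ingest(rng.normal(0.0, 0.5, 2), 0)
    clf.ingest(rng.normal(10.0, 0.5, 2), 1)

probe = np.array([9.3, 9.3])
before = clf.predict(probe)               # correctly classified as 1

# Data poisoning: adversarial inputs that look like class 1 but carry
# label 0 drag the class-0 centroid toward class 1.
for _ in range(200):
    clf.ingest(rng.normal(10.0, 0.5, 2), 0)

after = clf.predict(probe)                # now misclassified as 0
```

Because nothing reviews the incoming samples, the model's behavior silently shifts -- exactly the unsupervised-retraining risk described above.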


The Expanding Role of Metadata Management, Data Quality, and Data Governance

After the data has been accurately defined, it is important to put in place procedures to assure its accuracy. Imposing controls on the wrong data does no good at all. Which raises the question: How good is your data quality? Estimates show that, on average, poor data quality is an overarching industry problem. According to data quality expert Thomas C. Redman, payroll record changes have a 1% error rate; billing records have a 2% to 7% error rate; and the error rate for credit records can be as high as 30%. But what can a DBA do about poor quality data? Data quality is a business responsibility, but the DBA can help by instating technology controls. By building constraints into the database, overall data quality can be improved. This includes defining referential integrity in the database. Additional constraints should be defined in the database as appropriate to control uniqueness, as well as data value ranges using check constraints and triggers. Another technology tactic that can be deployed to improve data quality is data profiling. Data profiling is the process of examining the existing data in the database and collecting statistics and other information about that data.
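The constraint-based controls described here can be sketched with SQLite (chosen only to keep the example self-contained; exact syntax varies by DBMS, and the tables are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces RI only when asked

conn.executescript("""
CREATE TABLE customer (
    cust_id INTEGER PRIMARY KEY
);
CREATE TABLE billing (
    bill_id INTEGER PRIMARY KEY,
    cust_id INTEGER NOT NULL REFERENCES customer(cust_id), -- referential integrity
    amount  NUMERIC NOT NULL CHECK (amount > 0)            -- value-range control
);
""")

conn.execute("INSERT INTO customer VALUES (1)")
conn.execute("INSERT INTO billing VALUES (1, 1, 99.95)")   # valid row

def accepted(sql):
    """True if the row passes the database's constraints, False otherwise."""
    try:
        conn.execute(sql)
        return True
    except sqlite3.IntegrityError:
        return False

orphan_ok = accepted("INSERT INTO billing VALUES (2, 42, 10)")  # no customer 42
negative_ok = accepted("INSERT INTO billing VALUES (3, 1, -5)") # violates CHECK
```

Bad rows are rejected at the database layer before they can pollute downstream reports, which is the point of pushing quality controls into constraints.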



Quote for the day:

"Concentrate all your thoughts upon the work in hand. The Sun's rays do not burn until brought to a focus." -- A.G. Bell

Daily Tech Digest - September 09, 2020

Use cases for AI and ML in cyber security

With more employees working from home, and possibly using their personal devices to complete tasks and collaborate with colleagues more often, it’s important to be wary of scams that are afoot within text messages. “With malicious actors recently diversifying their attack vectors, using Covid-19 as bait in SMS phishing scams, organisations are under a lot of pressure to bolster their defences,” said Brian Foster, senior vice-president of product management at MobileIron. “To protect devices and data from these advanced attacks, the use of machine learning in mobile threat defence (MTD) and other forms of managed threat detection continues to evolve as a highly effective security approach. “Machine learning models can be trained to instantly identify and protect against potentially harmful activity, including unknown and zero-day threats that other solutions can’t detect in time. Just as important, when machine learning-based MTD is deployed through a unified endpoint management (UEM) platform, it can augment the foundational security provided by UEM to support a layered enterprise mobile security strategy. “Machine learning is a powerful, yet unobtrusive, technology that continually monitors application and user behaviour over time so it can identify the difference between normal and abnormal behaviour. ...”


Evilnum group targets FinTech firms with new Python-based RAT

The infection chain also adds a rogue scheduled task called “Adobe Update Task", which executes yet another malicious downloader that poses as Adobe's Flash Player and is called Fplayer.exe. This file is a maliciously modified version of Nvidia's Stereoscopic 3D driver Installer. It seems that the Evilnum attackers have gone to great lengths to maintain persistence and stealth by impersonating a variety of legitimate programs that administrators might not find suspicious on a Windows system. The PyVil RAT talks to the command-and-control (C&C) server using HTTP but the data inside is encrypted with a hard-coded key to hide it from network-level Web traffic inspection products. In the past, Evilnum configured its malware to only talk to command-and-control servers using IP addresses, not domain names. However, Cybereason has detected a growing number of domains being associated with the IP addresses used by the Evilnum C&C infrastructure during the past weeks, signaling a change in tactics as well as a growing infrastructure. The researchers also observed PyVil RAT downloading a custom version of an open-source password dumping tool called LaZagne, a post-exploitation tool that's written in Python and is popular with penetration testers. 


Open source data control for cloud services with Apache Ranger

RBAC is based on the concepts of users, roles, groups, and privileges in an organization. Administrators grant privileges or permissions to pre-defined organizational roles—roles that are assigned to subjects or users based on their responsibility or area of expertise. For example, a user who is assigned the role of a manager might have access to a different set of objects and/or is given permission to perform a broader set of actions on them as compared to a user with the assigned role of an analyst. When the user generates a request to access a data object, the access control mechanism evaluates the role assigned to the user and the set of operations this role is authorized to perform on the object before deciding whether to grant or deny the request. RBAC simplifies the administration of data access controls because concepts such as users and roles are well-understood constructs in a majority of organizations. In addition to being based on familiar database concepts, RBAC also offers administrators the flexibility to assign users to various roles, reassign users from one role to another, and grant or revoke permissions as required. Once an RBAC framework is established, the administrator's role is primarily to assign or revoke users to specific roles. 
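Stripped of Ranger's machinery, the evaluation step the passage describes reduces to a small lookup. The users, roles, and permissions below are hypothetical examples:

```python
# Hypothetical roles, users, and permissions for illustration only.
ROLE_PERMISSIONS = {
    "manager": {("sales_db", "read"), ("sales_db", "write"), ("hr_db", "read")},
    "analyst": {("sales_db", "read")},
}

USER_ROLES = {"alice": {"manager"}, "bob": {"analyst"}}

def is_authorized(user: str, obj: str, action: str) -> bool:
    """Grant the request iff any role assigned to the user carries the
    (object, action) permission -- the evaluation step described above."""
    return any((obj, action) in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

Reassigning "bob" from analyst to manager in USER_ROLES changes his effective permissions immediately, without touching any object-level grants -- exactly the administrative simplification RBAC offers.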


Using Measurement to Optimise Remote Work

Citrix’s Remote Works Podcast recently interviewed Laura Giurge, a post-doctoral researcher at London Business School and Oxford University’s Wellbeing Research Centre. Giurge explained that the pandemic has created a "big experiment of working from home." She noted that its findings were challenging the traditional assumption that productivity is measured in hours worked, rather than the impact of an employee’s output. Giurge explained that this required a change in mindset and was particularly challenging for traditional managers: "It is really hard for managers, if you are really used to seeing your employees in the office and all of a sudden you’re not. It’s very difficult. But if you start from a mindset of experimentation and understanding there are better ways for experimenting with new ways of working and seeing what works, then you are likely to get your employees to work better and also be happier." Longman wrote that he "calculated the average number of stories" completed "during 2019 and used this as a comparison with 2020 data." By examining trends by month and by quarter he wrote that "both views suggested that the work completed during lockdown was within ... expected levels of volatility."


Data Labeling for Natural Language Processing: a Comprehensive Guide

Once you have identified your training data, the next big decision is in determining how you’d like to label that data. The labels to be applied can lead to completely different algorithms. One team browsing a dataset of receipts may want to focus on the prices of individual items over time and use this to predict future prices. Another may be focused on identifying the store, date and timestamp and understanding purchase patterns. Practitioners will refer to the taxonomy of a label set. What level of granularity is required for this task? Is it enough to understand that a customer is sending in a customer complaint and route the email to the customer support team? Or would you like to specifically understand which product the customer is complaining about? Or even more specifically, whether they are asking for an exchange/refund, complaining of a defect, an issue in shipping, etc.? Note that the more granular the taxonomy you choose, the more training data will be required for the algorithm to adequately train on each individual label; phrased differently, each label requires a sufficient number of examples, so more labels means more labeled data overall.
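A small sketch shows the trade-off: the same handful of (hypothetical) labeled emails spread across a coarse taxonomy versus a fine-grained one, leaving far fewer examples per fine label:

```python
from collections import Counter

# Hypothetical emails, each labeled under a coarse and a fine taxonomy.
emails = [
    ("my package never arrived",       "complaint", "complaint/shipping"),
    ("this blender stopped working",   "complaint", "complaint/defect"),
    ("please refund my order",         "complaint", "complaint/refund"),
    ("i want to exchange for a large", "complaint", "complaint/exchange"),
    ("do you ship to Canada?",         "question",  "question/shipping"),
    ("what sizes do you stock?",       "question",  "question/catalog"),
]

coarse_counts = Counter(label for _, label, _ in emails)
fine_counts = Counter(label for _, _, label in emails)

# Six examples give at least two per coarse label, but only one per fine
# label -- a finer taxonomy needs proportionally more labeled data.
min_examples_coarse = min(coarse_counts.values())
min_examples_fine = min(fine_counts.values())
```

Routing to a support team might work with the two coarse labels; distinguishing refunds from defects would need many more examples per fine label before a model could train adequately.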


Chilean bank shuts down all branches following ransomware attack

The incident is currently being investigated as having originated from a malicious Office document received and opened by an employee. The malicious Office file is believed to have installed a backdoor on the bank's network. Investigators believe that on the night between Friday and Saturday, hackers used this backdoor to access the bank's network and install ransomware. Bank employees working weekend shifts discovered the attack when they couldn't access their work files on Saturday. BancoEstado reported the incident to Chilean police, and on the same day, the Chilean government sent out a nationwide cyber-security alert warning about a ransomware campaign targeting the private sector. While initially, the bank hoped to recover from the attack unnoticed, the damage was extensive, according to sources, with the ransomware encrypting the vast majority of internal servers and employee workstations. The bank initially disclosed the attack on Sunday, but as time went by, bank officials realized employees wouldn't be able to work on Monday, and decided to keep branches closed, while they recover. Luckily, it appears the bank had done its job and properly segmented its internal network, which limited what the hackers could encrypt.


How to ensure cybersecurity and business continuity plans align

Ideally, according to industry good practice, a disruptive incident should trigger an IR plan that assesses the damage and initiates steps to respond quickly to the cyber incident. Results of the IR plan can trigger a BC or a DR plan, or both, based on the nature of the event. BC/DR plans recover and restore critical assets -- people, processes, technology and facilities -- the business needs to function. Cybersecurity plans respond to specific disruptive events and may include an IR plan component to determine the nature of the event before launching response activities. The key is to determine at what point the cybersecurity attack threatens the organization and its ability to conduct business. This suggests that descriptive language should be added to cybersecurity plans to trigger IR, as well as BC/DR plans. Let's assume there's a full complement of plans in place that deal with business- and technology-focused incidents. In some cases, only a specific security strategy or plan -- e.g., information security -- will be needed. In other situations, one or more plans may need to be launched. The figure below depicts a simple decision flow diagram showing how such plan linkages may be arranged and launched in response to a cybersecurity attack.
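The plan linkage can be sketched as a simple decision function; the plan names and trigger fields below are illustrative assumptions, not a reproduction of the article's figure:

```python
def plans_to_launch(incident):
    """Sketch of the decision flow: IR always runs first to assess damage;
    its findings then decide whether BC and/or DR plans are triggered.
    The trigger fields are illustrative, not from the article's figure."""
    launched = ["IR"]                          # incident response assesses the event
    if incident.get("business_processes_disrupted"):
        launched.append("BC")                  # restore people, processes, facilities
    if incident.get("systems_or_data_lost"):
        launched.append("DR")                  # recover technology assets
    return launched
```

Encoding the triggers explicitly, even at this level of simplicity, is one way to add the "descriptive language" the passage recommends to cybersecurity plans.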


UK tech sector vacancies up 36% during summer — Tech Nation

“Since lockdown, companies have come to realise that they need industrial-grade technology to run their businesses and tech companies are hiring people to service these new customers, expand and build new products,” said Haakon Overli, co-founder of enterprise software-focused venture fund Dawn Capital. “We’re seeing it right across our portfolio.” The Tech Nation research also suggests that recovery from the pandemic is set to be uneven, with industries such as travel and retail predicted to drastically cut their workforces, while others are able to prosper from changes in customer behaviour. Additionally, the report said the trend of remote working will continue to open up “high-paid, quality” opportunities to residents outside larger cities. Recent research from Culture Shift found that culture has improved for tech sector employees while working remotely. However, 50% said they feel isolated while working from home. Despite employment in the UK tech sector looking promising due to the surge in vacancies, the skills gap remains an issue, with two thirds of businesses already having unfilled digital skills vacancies, while 58% say they’ll need significantly more digital skills in the next five years, according to the CBI.


Why More Healthcare Providers are Moving to Public Cloud

One of the hardest truths of this extraordinary time -- apart from the human suffering -- is that the need for dynamic surge capacity will not disappear when a vaccine is available. As the World Economic Forum has said, we have entered a new era where the risk of future pandemics is high. This forever alters the infrastructure needed to support shifting demands on technology. The public cloud offers the systems resilience that healthcare providers need in order to sustain operations under severe disruption, flexing to address highly volatile customer demand and managing vastly increased needs for remote network access. Providers long viewed investing in the public cloud as a risky business because of security concerns. But over the past two years, many have begun their cloud journey buoyed by other industries’ and research institutions’ embrace of its “deny by default” security posture and, most importantly, limitless opportunities for innovation. There could not be a better time for this. An investment in systems resilience via cloud is an investment in business enablement. A resilient technology infrastructure scales up or down on demand based on real-time changes in usage to support care volume variability. It identifies traffic spikes and automatically adjusts capacity to drive responsiveness with new cost efficiencies.


Cybersecurity Skills Gap Worsens, Fueled by Lack of Career Development

The fundamental causes of the skills gap are myriad, starting with a lack of training and career-development opportunities. About 68 percent of the cybersecurity professionals surveyed by ESG/ISSA said they don’t have a well-defined career path, and basic growth activities, such as finding a mentor, getting basic cybersecurity certifications, taking on cybersecurity internships and joining a professional organization, are often missing from their career development. The survey also found that many professionals start out in IT, and find themselves working in cybersecurity without a complete skill set. A full 63 percent of respondents in the survey said they’ve worked in cybersecurity for less than three years, with 76 percent starting as IT professionals before switching their career to cybersecurity. “Cybersecurity professionals often muddle through their careers with little direction, jumping from job to job and enhancing their skill sets on the fly rather than in any systematic way,” according to the report. To go along with this, the survey asked respondents to speculate on how long it takes a cybersecurity professional to become proficient at the job. The highest percentage of respondents (39 percent) believe it takes anywhere from three to five years to develop real cybersecurity proficiency, while 22 percent say two to three years, and 18 percent claim it takes more than five years.



Quote for the day:

"It's fine to celebrate success but it is more important to heed the lessons of failure." -- Bill Gates

Daily Tech Digest - September 08, 2020

Closing The Tech Skills Gap: 3 Key Factors For CEOs To Consider

Limited resources and tightened budgets have placed restrictions on hiring new talent, and several industries were left scrambling to reskill and quickly adapt. While hiring new talent seems like a valid solution, in reality, the hiring, onboarding and culture development process requires a significant amount of time and dedication, impacting the company’s overall output. As enterprises continue to identify ways to do more with less, now is an opportune time for reskilling and upskilling initiatives to become part of the “new norm.” Reskilling and upskilling initiatives are not only beneficial to employees but impactful to the enterprise. According to a recent study, nearly 30% of employees feel their skills will be redundant within the next two years, with 50% of those in Gen-Y and Gen-Z indicating that their skills will be irrelevant within the next four to five years. Although technology tends to create more jobs than it takes away, those fears are still incredibly prevalent. A workforce of the future must be prepared to welcome change and remain agile; it must also have the support and resources to further enhance its skills. Furthermore, employees will find comfort in knowing their company wants to invest in them and their future—and loyalty will likely follow.


Stretch or safe? The art of setting goals for your teams

With so little clarity about the future, how can leaders set business goals for the next six months to a year? During the dozen years between the 2008 financial crisis and the current pandemic, the world seemed far more stable, and budgeting was more of a predictable process. But now? Who knows. We are living in an era of VUCA, an acronym coined by the U.S. Army War College that stands for volatility, uncertainty, complexity, and ambiguity. This uncertainty is raising new challenges for a fundamental leadership skill: goal setting. It is as much an art as a science, because it requires finding the sweet spot between the aspirational and the realistic. Yes, there is something galvanizing and inspirational about a big stretch goal, as President John F. Kennedy knew in 1961, when he announced that the United States would put a man on the moon by the end of the decade, even though the longest time any American had spent in space was barely 15 minutes. The business leader’s job is to set an ambitious target that will bring out the best in a company’s teams and achieve what may seem impossible at first. These are the BHAGs — or big, hairy, audacious goals, in the words of Jim Collins, the author of Good to Great and other books.


Can AI help with your quest for global talent?

For candidates, AI can help to eliminate some of the most problematic human flaws in the recruitment process: hiring bias. Although often unintentional, stereotypes and personal prejudices are something which even the most conscientious recruiters can fall foul of. AI allows for blind applicant screening and levels the playing field. Chatbots can also help to improve the candidate experience and engagement by offering immediate replies to inquiries or queries, simple job applications and ongoing assistance throughout the process. Employers and HR personnel can benefit massively from AI, too. For starters, it can be used to scan CVs for certain keywords to shortlist the most suitable candidates intelligently. Predictive analysis can even determine which candidates are more likely to succeed in the roles — helping to improve the quality of the hire and ensure only the most retainable talents are brought on board. AI can also help companies reach passive candidates who aren’t actively seeking a new role — which can often be one of the best applicant pools. In the past, reaching these candidates involved poring through CV databases, lots of cold-calling and even more dead ends. 
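The keyword-scanning step might look like the following sketch. The keywords, scoring weights, and candidates are invented for illustration; real screening systems are far richer and, as the discussion of data bias elsewhere in this digest shows, still need auditing:

```python
# Hypothetical keyword lists; a real screener would use far richer signals.
REQUIRED = {"python", "sql"}
NICE_TO_HAVE = {"airflow", "spark"}

def score_cv(text: str) -> float:
    """Score a CV: 0 if any must-have keyword is missing, otherwise a base
    score plus a bonus per nice-to-have keyword found."""
    words = set(text.lower().split())
    if not REQUIRED <= words:
        return 0.0
    return 1.0 + 0.5 * len(NICE_TO_HAVE & words)

def shortlist(cvs, top_n=2):
    """Rank (name, cv_text) pairs by score; keep the best-scoring top_n."""
    ranked = sorted(cvs, key=lambda c: score_cv(c[1]), reverse=True)
    return [name for name, text in ranked if score_cv(text) > 0][:top_n]
```

Note that the screening here is "blind" in the sense the passage describes: nothing but the CV text enters the score.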


99 Ransomware Problems - and a Decryptor Ain't One

Security experts say that more organizations have been putting in place viable defenses against ransomware, including frequently backing up all systems, and storing those backups offline. As a result, if they suffer a ransomware infection, they can simply wipe systems and restore from backups, without having to even consider paying a ransom. In response, beginning in November 2019, the Maze gang began exfiltrating data before crypto-locking systems, then using the threat of data leaking to try and force more victims to pay. Unfortunately, this strategy not only worked, but has been emulated by numerous other gangs ... Unfortunately, the move to exfiltrate data, name-and-shame victims and so on has been leading to higher profits for criminals. In numerous recent cases, despite being able to fully restore data from backups, victims have then felt "compelled to have to engage in an extortion negotiation and potentially a payment to a threat actor because of the potential for what they deemed to be irreparable harm to their business if the information is leaked, and so they end up paying to prevent that," says Coveware CEO Bill Siegel.


The New Capabilities in Endpoint Security for Businesses

Surprisingly, endpoint security has evolved perhaps the most of any branch of cybersecurity. After all, look at the history of these critical business-level solutions. At first, they only needed to protect a defined set of physical, on-premises devices from known malware and viruses. A simple antivirus solution could do the trick many times over. However, enterprises now face an increasingly complex IT and device environment that in no way resembles ages past. For example, you need to contend with the increased necessity of remote work in the wake of COVID-19; in fact, these changes might result in permanent reassessments of work-from-home policies. That means new endpoints operating on personal Wi-Fi or public Wi-Fi connections, both of which pose cybersecurity challenges in terms of visibility and consistency. Additionally, the endpoints connecting to corporate networks are themselves undergoing changes. No less an authority than Gartner has noted that bring-your-own-device (BYOD) as a term may not adequately describe the situation. It might more accurately be summarized as bring-your-own-PC (BYOPC), which adds another layer of endpoint security complexity.


How Diffblue uses AI to automate unit testing for Java applications

The irony, as Diffblue CEO Mathew Lodge pointed out in an interview, is how late the software industry is in embracing AI to improve software development, given how we've used AI to automate and disrupt so many other industries--from retail, travel, transportation, manufacturing, and more. Lodge said Diffblue researchers took advantage of the machine learning strategy that powered AlphaGo, Alphabet subsidiary DeepMind's software program that beat the world champion player of Go. While the company is starting with a Java solution (by far the most popular language in the Global 2000 where companies invest heavily in productivity tools), its technology can also be used to automate testing for most programming languages such as Python, JavaScript, and C#, among others.  Among the first customers to roll out Diffblue's solution is Goldman Sachs (with an annual IT budget larger than many countries' GDP). Using Diffblue's AI on one module with an important backend system, Goldman Sachs was able to expand existing unit test coverage from 36% to 72% in less than 24 hours, a feat that would have required more than eight days of developer time if done manually. Developer time savings? 90%.


The Cloud Is Not The Edge

Over the last 15 years, we have seen major growth in social and mobile categories and SaaS offerings. Most recently, a new technology has emerged called the internet of things (IoT), and it demands a new type of computing called edge computing.  Today, as we shift from doing all processing on Amazon Web Services or Microsoft Azure computers and move it to our businesses, construction zones, farms and trucks, we hear that edge computing will be “bigger than the cloud.” This new type of computing will provide augmented reality for remote service, real-time monitoring of equipment in the field, optimizations for natural resources and machine-learned energy efficiencies, among other returns. While it’s tempting to believe we can just move our cloud applications to the edge, this is not possible. Furthermore, companies that take such a strategy will struggle for years to come because the cloud is fundamentally different from edge computing. ... Edge-native architectures should expect a diverse infrastructure for deployment. This means that edge applications should easily run on bare metal processors, virtual machines and containers. Conversely, cloud offerings and services are built and heavily tuned for a single type of environment and cannot run anywhere.


Low-Code Revolution to Prepare Manufacturers for Industry 4.0

The manufacturing industry is undergoing a digitization push. The sector has already adopted digital technologies such as artificial intelligence, augmented reality, robotics, and additive manufacturing. These technologies have given manufacturers a competitive advantage in terms of efficiency and cost. With the pandemic causing traditional supply chains and manufacturing environments to crumble, there is a need to move towards a digitally driven, more flexible and agile approach. In these challenging times, many leading companies are innovating and developing their own applications. Businesses that tailor their existing technical capability and resources to digital technology can limit COVID-19’s impact. At a time when there are limited resources and less time to build applications for business continuity, businesses are relying on Low-Code technology to create and pilot new applications at rapid rates. Low-Code platforms are becoming popular among manufacturing companies because they deliver customized solutions and offer flexibility, scalability, and efficient technological innovation.


Five lessons for digital transformation success

Developing the right talents and skills is one of the important transformation initiatives. While some people might immediately say digital technologies are the key success factor, those who are experienced in the process would say that’s not necessarily so. Chan Suh, chief digital officer of business transformation specialist Prophet, warns against being seduced by the promises of technology’s magical tools for creating revenue growth. While businesses may need digital innovations such as artificial intelligence for deep insight, tech stacks are just tools and, without the right operating instructions, they either lie fallow or become money pits. Suh says it’s a mistake that has cost global businesses billions of dollars in wasted investments. “We need the conceptual strategies and innovations to guide our tech investments as well as the human expertise to use it properly. However, that human expertise is especially rare when it comes to navigating the highly complicated interdependencies of digitally powered businesses,” he says. With building capability, the key is the right mix of human expertise and technology working in a coherent, flexible operating model with the customer at the centre.


Delivering on your promises

Bertini and Koenigsberg make an impassioned and ambitious case for rewriting the rules of commerce. They argue that although customers want to buy a solution to a “job that needs to be done” (in the words of Clay Christensen), they’re offered only the means to buy that solution, typically by taking ownership of a product. This is due to “a combination of neglect, inertia, fear of change, and comfort with the status quo” on the part of companies. Buying a product (e.g., an engine) isn’t always a good proxy for the end goal (e.g., reliable high performance). Reserving particular wrath for healthcare, education, and advertising, the authors focus on three forms of waste in the exchange between companies and customers: (1) access — customers can’t get the product (e.g., a car) they want because of the cost or a lack of stock; (2) consumption — they don’t or can’t use what’s offered (e.g., bundles of TV programs or a car that sits unused 90 percent of the time); and (3) performance — the product doesn’t deliver the value customers expect. “Lean commerce,” in which the fortunes of companies depend explicitly on delivering value to the customer, is a much more efficient model. To determine value, the authors use an end or outcome that can be easily understood, verified, and quantified. Feeling happy or amused is hard to measure, but measuring a laugh is easier.



Quote for the day:

"When Things Fall Apart " is when we usually have the most to learn about ourselves." -- Oprah

Daily Tech Digest - September 07, 2020

Brain-Inspired Electronic System Could Make AI 1,000 Times More Energy Efficient

In the new study, published in Nature Communications, engineers at UCL found that accuracy could be greatly improved by getting memristors to work together in several sub-groups of neural networks and averaging their calculations, meaning that flaws in each of the networks could be cancelled out. Memristors, described as "resistors with memory" because they remember the amount of electric charge that flowed through them even after being turned off, were considered revolutionary when they were first built over a decade ago, a "missing link" in electronics to supplement the resistor, capacitor, and inductor. They have since been manufactured commercially in memory devices, but the research team say they could be used to develop AI systems within the next three years. Memristors offer vastly improved efficiency because they operate not just in a binary code of ones and zeros, but at multiple levels between zero and one at the same time, meaning more information can be packed into each bit. Moreover, memristors are often described as a neuromorphic (brain-inspired) form of computing because, like in the brain, processing and memory are implemented in the same adaptive building blocks, in contrast to current computer systems that waste a lot of energy in data movement.
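As a back-of-the-envelope illustration of why multi-level operation packs more information into each cell: a cell that can reliably hold one of N distinguishable levels stores log2(N) bits, versus a single bit for a purely binary cell. The level counts below are illustrative, not figures from the study.

```python
# Bits stored per cell as a function of distinguishable levels:
# a binary cell holds 1 bit; a 16-level cell holds 4 bits.
import math

bits_per_cell = {levels: math.log2(levels) for levels in (2, 4, 16)}
for levels, bits in bits_per_cell.items():
    print(f"{levels} levels -> {bits:.0f} bits per cell")
```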


Management skills: Five ways building your network will help you get ahead

Mark Gannon, director of business change and information solutions at Sheffield City Council, says smart digital leaders make sure they carry on learning – even once they get to the very top. Gannon says developing experiences outside the day job has always been important to him, both as full-time CIO and in his stint as a consultant before joining the council. "There's the basic stuff about just getting out there and understanding your customers and spending time to speak with them. Consulting was interesting because it gave me the opportunity to look outside my own experience and see what other organisations were doing. I think it's really important to be constantly learning," he says. Gannon suggests his determination to develop new skills might be something to do with having completed a doctorate prior to joining the IT profession. His interest in education continues to this day – Gannon is a school parent governor. "Being a governor is interesting and getting out and engaging with other networks in the city is something I do a lot. We've developed a cross-community network, called dotSHF, which is about how we bring together the work that's being done by sole traders, and private and public sector organisations around digital," says Gannon.


Telling tales: using behavioural AI to reconstruct attack storylines

Behavioural AI can be used to mitigate threats automatically, a seriously powerful game-changer. The technology is capable of making a decision on the device, without relying on the cloud, or on humans, to tell it what to do. Monitoring behaviour is a tricky, complex problem, and you want to feed your algorithm robust, informative, context-rich data which really captures the essence of a program's execution. To do this, you need to monitor the operating system at a very low level and, most importantly, link individual behaviours together to create full "storylines". For example, if a program executes another program, or uses the operating system to schedule itself to execute on boot-up, you don't want to consider these as different, isolated executions, but as a single story. Training AI models on behavioural data is similar to training static models, but with the added complexity of the time dimension. In other words, instead of evaluating all features at once, you need to consider cumulative behaviours up to various points in time. Interestingly, if you have good enough data, you don't really need an AI model to convict an execution as malicious. For example, if a program starts executing with no user interaction, then registers itself to start when the machine is booted, then starts listening to keystrokes, you could say it's very likely a keylogger and should be stopped.
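The keylogger heuristic above can be sketched as a simple storyline rule. This is a hypothetical illustration, not any vendor's actual engine: behaviours observed for one execution chain accumulate into a storyline, and a conviction fires only when the suspicious combination appears together.

```python
# Hypothetical sketch: linking individual process behaviours into one
# "storyline" and applying a simple rule to flag a likely keylogger.
from dataclasses import dataclass, field

@dataclass
class Storyline:
    """Accumulates behaviours observed for a single execution chain."""
    behaviours: list = field(default_factory=list)

    def record(self, behaviour: str) -> None:
        self.behaviours.append(behaviour)

    def looks_like_keylogger(self) -> bool:
        # The rule from the text: no user interaction, boot persistence,
        # and keystroke capture together are damning, even without a model.
        required = {"no_user_interaction", "register_boot_persistence",
                    "listen_keystrokes"}
        return required.issubset(self.behaviours)

story = Storyline()
for event in ("no_user_interaction", "register_boot_persistence",
              "listen_keystrokes"):
    story.record(event)
print(story.looks_like_keylogger())  # True
```

Any one of these behaviours in isolation is innocuous; it is the linked storyline that makes the verdict cheap and confident.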


Microsoft Updates Edge With Exciting New Features To Beat Chrome

Microsoft’s Edge browser is growing in popularity, reaching the number two position in the desktop browser market, even beating privacy-focused option Firefox. Now Microsoft has just unveiled a bunch of new features that make it a valid alternative to Google Chrome as an increasing number of people work from home. One very useful update, which would be great if it comes to fruition, was spotted by Windows Latest in the Edge Canary developer build: a new feature called “Web Capture” which allows you to take a screenshot of a webpage—in full or cropped—and copy it to the clipboard or preview it. ... Meanwhile, more new features to boost your security are expected in Edge 86, which is due to drop in the next few weeks, Microsoft has confirmed. These include new alerts from the Edge password monitor if a compromised password is detected. At the same time, Edge will add the option to show or hide the favorites bar from the favorites management page. Edge will also add policy improvements for enterprises using the browser across various users and applications. Just last week, Microsoft started to roll out Edge 85 with multiple features aiming to help those working from home during the coronavirus pandemic.


How AI will automate cybersecurity in the post-COVID world

At a basic level, AI uses data to make predictions and then automates actions. This automation can be used for good or evil. Cybercriminals take AI designed for legitimate purposes and use it for illegal schemes. Consider one of the most common defenses attempted against credential stuffing – CAPTCHA. Invented a couple of decades ago, CAPTCHA tries to protect against unwanted bots by presenting a challenge (e.g., reading distorted text) that humans should find easy and bots should find difficult. Unfortunately, cybercriminal use of AI has inverted this. Google did a study a few years ago and found that machine-learning based optical character recognition (OCR) technology could solve 99.8% of CAPTCHA challenges. This OCR, as well as other CAPTCHA-solving technology, is weaponized by cybercriminals who include it in their credential stuffing tools. Cybercriminals can use AI in other ways too. AI technology has already been created to make cracking passwords faster, and machine learning can be used to identify good targets for attack, as well as to optimize cybercriminal supply chains and infrastructure. We see incredibly fast response times from cybercriminals, who can shut off and restart attacks with millions of transactions in a matter of minutes.


The Principles of Planning and Implementing Microservices

Each service should have a version, which is updated in every release. Versioning makes it possible to identify a service and deploy a specific version of it. It also enables the consumers of the service to know when the service has changed, and thereby avoid breaking the existing contract and the communication between the services. Different versions of the same service can coexist. With that, the migration from the old version to the new version can be gradual, without having too much impact on the whole application. ... In a microservices environment, there are many small services that communicate constantly with each other, so it is easy to lose track of what a service does or how to use its API. Documentation can help with that. Keeping documentation valid and up to date is a tedious and time-consuming task. Naturally, it can end up low on the developer's list of priorities. Therefore, automation is required instead of documenting manually (readme files, notes, procedures). There are various tools to codify and automate tasks that keep the documentation updated while the code continues to change. Tools like Swagger UI or API Blueprint can do the job. They can generate a web UI for your microservices API, which alleviates the orientation effort. Once again, standardisation is an advantage; for example, Swagger implements the OpenAPI specification, which is an industry standard.
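A minimal sketch of the coexistence idea, assuming semantic versioning (hypothetical helper, not part of any specific framework): before routing a call, a consumer checks that the deployed service's version is compatible with the version it was built against, so old and new versions can run side by side during a gradual migration.

```python
# Hypothetical sketch: semantic-version compatibility check between a
# consumer's required service version and the deployed service version.
def is_compatible(required: str, deployed: str) -> bool:
    """Compatible when the major version matches and the deployed
    minor/patch is at least as new as what the consumer requires."""
    req = tuple(int(part) for part in required.split("."))
    dep = tuple(int(part) for part in deployed.split("."))
    return dep[0] == req[0] and dep[1:] >= req[1:]

print(is_compatible("2.1.0", "2.3.4"))  # True: same major, newer minor
print(is_compatible("2.1.0", "3.0.0"))  # False: breaking major change
```

The same check is what lets a router keep serving `2.x` consumers from the old deployment while `3.x` consumers are migrated over.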


How Cybercriminals Take the Fun Out of Gaming

The underground market is also active. In a recent blog, Singer broke down the world of cybercrime in games. "The first thing to understand about the criminals who attack the games industry is that they participate in a working, fluid, day-to-day economy that they manage completely themselves," he wrote. "Cybercriminals have built informal structures that mirror the efficiencies of standard enterprise operations. They have developers, QA folks, middle managers, project managers, salespeople, and even marketing and PR people who hype vendors and products." Austin Francisco, security analyst at Key Cyber Solutions (KCS) – who has "been gaming since the '90s" – says hackers advertise stolen goods and cheats as "a product and not like a hack," offering player values such as the ability to "have 100% accuracy aim" or "see people through walls," for example. Singer doesn't understand the appeal, but "there are enough people who enjoy it that there's a thriving industry," he says. One popular attack is account takeover (ATO), which is used to steal other players' goods. It's a large market due to the sheer amount of value tied to a player account: from in-game currencies to achievements unlocked to player status and "skins".


“Enterprise-Class Open-Source Data Tools” Is Not an Oxymoron

Open source may bring up pictures of dark alleys and bug-ridden software, but in today’s data-driven world, there’s a new class of solutions. These open-source tools are the basis for inquiries into the deepest complexities of artificial intelligence and big data, designed around the massive data load we create each day. The open-source community works fast, addressing bugs, security loopholes, and the simple need to make streamlined tools for real-time insight. Today’s open-source tools result from years of research and a generation of developers who don’t remember a time when data wasn’t the new oil. Data itself is coming unlocked from previous silos and repositories, existing in a continuous state—data in motion. Leveraging open-source tools allows companies to dream of a reality in which company decisions are data-driven by the second. Every person in the organization has access to the data they need. Enterprises must find open-source tools with layers of capability explicitly designed for their unique data picture. These tools facilitate complex governance without creating pipeline bottlenecks. They provide automated documentation of changes, usage, and authorship.


Threat identification is IT ops' role in SecOps

Identifying important assets helps focus SecOps efforts. Additionally, IT operations teams should base threat identification practices on workflows. The goal is to understand workflows and their properties, as well as the statistical results of valid workflow patterns. IT ops teams can thus recognize the ways in which a workflow deviates from the norm, and the potential threats behind such deviations. There are generally two pieces to this process: threat incident logging and tracking, and workflow monitoring for abnormal patterns. Many security threats to IT systems require multiple attempts by the attacker. At least some of these attempts get recognized, reported and logged as violations. However, logging tools often ignore a low volume of incidents. These tools use pattern analysis to indicate an active threat. To help the tools find these patterns, classify threat incidents. For example, a series of incidents from a single location or individual that has rarely generated an incident -- imagine someone entering the wrong password -- is a potential threat indicator. While multiple incidents stemming from one source are suspect, so is a series of incidents generated by different sources. Intruders might try several different IP addresses in an attack, for example. In this example, a pattern of events in the threat incident log will be obvious.
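Both patterns described above can be sketched with a few lines of incident-log analysis. This is a hypothetical illustration with made-up thresholds and field names, not a specific product's logic: a burst of violations from one source, or the same violation arriving from many distinct sources, each raises a flag.

```python
# Hypothetical sketch: classify logged incidents so that low-volume
# patterns are not ignored. Flags a burst from a single source and the
# same violation spread across many distinct sources.
from collections import Counter

def flag_patterns(incidents, burst_threshold=3, spread_threshold=3):
    """incidents: list of (source_ip, violation) tuples."""
    by_source = Counter(src for src, _ in incidents)
    by_violation = {}
    for src, violation in incidents:
        by_violation.setdefault(violation, set()).add(src)

    flags = []
    for src, count in by_source.items():
        if count >= burst_threshold:
            flags.append(f"burst from single source {src}")
    for violation, sources in by_violation.items():
        if len(sources) >= spread_threshold:
            flags.append(f"'{violation}' seen from {len(sources)} distinct sources")
    return flags

# Three failed logins from one address, plus the same violation from
# three other addresses (e.g., an attacker rotating IPs).
log = [("10.0.0.5", "bad_password")] * 3 + \
      [(f"10.0.0.{i}", "bad_password") for i in range(1, 4)]
print(flag_patterns(log))
```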


Demystifying Behavior Driven Development with Cucumber-JVM

Keeping aside the fancy terms associated with end-to-end test writing, such as reusability, maintainability, and scalability, I always prefer a simple definition: test cases should be written and arranged so that they can run any number of times, in any sequence, and with a variety of different datasets. However, it is not as simple as it sounds. This kind of test-writing approach demands that different teams collaborate to discuss product behavior from the very first day. Therefore, Behavior Driven Development is based on fair collaboration among the three amigos (Business Analyst, Developer, and Tester) in its entirety. Intriguingly, the primary reason for the popularity of BDD testing is its non-technical, clear, and concise plain English (or any other language of your choice). This way, a business owner can play a significant yet prompt role by specifying the requirement in a language which is understood not just by different teams (developers and testers) but by the testing framework as well. In our case of Cucumber-JVM, the commonly understood language is Gherkin, which shapes the overall concept. Gherkin is a language with no technical barriers;
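Cucumber-JVM binds Gherkin steps to Java methods with annotations. Purely to illustrate the binding idea compactly, here is a toy step-matching sketch in Python (hypothetical, not the Cucumber API): plain-English steps are matched by regular expressions to executable step definitions that share a scenario context.

```python
# Toy sketch of Gherkin-style step binding: regex patterns map plain-English
# steps to functions, which share one context dict per scenario.
import re

steps = {}

def step(pattern):
    """Register a step definition under a regex pattern."""
    def register(fn):
        steps[pattern] = fn
        return fn
    return register

@step(r"the user has (\d+) items in the cart")
def given_items(ctx, n):
    ctx["cart"] = int(n)

@step(r"the user removes (\d+) items?")
def when_remove(ctx, n):
    ctx["cart"] -= int(n)

@step(r"the cart contains (\d+) items?")
def then_cart(ctx, n):
    assert ctx["cart"] == int(n)

def run(scenario):
    """Execute Gherkin-style lines against the registered steps."""
    ctx = {}
    for line in scenario:
        text = re.sub(r"^(Given|When|Then|And)\s+", "", line)
        for pattern, fn in steps.items():
            match = re.fullmatch(pattern, text)
            if match:
                fn(ctx, *match.groups())
                break
    return ctx

result = run(["Given the user has 5 items in the cart",
              "When the user removes 2 items",
              "Then the cart contains 3 items"])
print(result["cart"])  # 3
```

The scenario text stays readable to a business owner, while the framework dispatches each line to code, which is exactly the division of labour Gherkin enables.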



Quote for the day:

"Hold yourself responsible for a higher standard than anybody expects of you. Never excuse yourself." -- Henry Ward Beecher