Daily Tech Digest - September 11, 2020

How this open source test framework evolves with .NET

Fixie v3 is a work in progress that we intend to release shortly after .NET 5 arrives. .NET 5 is the resolution to the .NET Framework vs. .NET Core development lines, arriving at One .NET. Instead of fighting it, we're following Microsoft's evolution: Fixie v3 will no longer run on the .NET Framework. Removing .NET Framework support allowed us to remove a lot of old, slow implementation details and dramatically simplified the regression testing scenarios we had to consider for each release. It also allowed us to reconsider our design. The Big Three requirements changed only slightly: .NET Core does away with the notion of an App.config file closely tied to your executable, instead relying on a more convention-based configuration. All of Fixie's assembly-loading requirements remained. More importantly, the circumstances around the design changed in a fundamental way: we were no longer limited to using types available in both .NET Framework and .NET Core. By promising less with the removal of .NET Framework support, we gained new degrees of freedom to modernize the system.


A 5-step Guide to Building Empathy that can Boost your Development Career

When you reflect on yourself, also analyze your interactions. When you speak, do you ramble on? Do you raise your voice or get upset easily? Do you talk more than listen? How do you come across physically? Do you roll your eyes, or dart them around the room? Do you slouch or bury your hands in your pockets? Think about the language you use during conversations. Do you use habitual phrases that help or hinder your message? Is your language helping others pay attention, or causing them to tune you out? Does it encourage conversations and build bridges? Are you making others feel heard and respected, or ignored and underappreciated? To start your self-awareness journey, you can take advantage of a number of tools: DISC, Real Colors, and Myers-Briggs are all great starting points to understanding your own personality. These tools are not there to dictate who you are, but to guide you in understanding who you are. When you take the quiz, you are essentially having a conversation with that quiz. The results are simply telling you how you showed up to that conversation - the outcome is affected by your mood, attitude, energy, recent events, etc.


New CDRThief malware targets VoIP softswitches to steal call detail records

"At the time of writing we do not know how the malware is deployed onto compromised devices," Anton Cherepanov, one of ESET's top malware hunters, wrote in an analysis today. "We speculate that attackers might obtain access to the device using a brute-force attack or by exploiting a vulnerability. Such vulnerabilities in VOS2009/VOS3000 have been reported publicly in the past," Cherepanov added. However, once the malware has a foothold on a Linux server running Linknat VOS2009 or VOS3000, the malware searches for the Linknat configuration files and extracts credentials for the built-in MySQL database, where the softswitch stores call detail records (CDR, aka VoIP call metadata). "Interestingly, the password from the configuration file is stored encrypted," Cherepanov pointed out. "However, Linux/CDRThief malware is still able to read and decrypt it. Thus, the attackers demonstrate deep knowledge of the targeted platform, since the algorithm and encryption keys used are not documented as far as we can tell. It means that the attackers had to reverse engineer platform binaries or otherwise obtain information about the AES encryption algorithm and key used in the Linknat code."
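
The pattern Cherepanov describes — locating a configuration file, pulling an encrypted credential out of it, and decrypting it with a recovered key — can be illustrated with a short sketch. The real Linknat scheme uses AES with undocumented keys; this sketch substitutes a trivial XOR cipher and an INI-style config purely to show the read-extract-decrypt flow (every name, key, and value here is hypothetical).

```python
import base64
import configparser

KEY = b"k"  # stands in for the key the attackers recovered -- illustrative only

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher standing in for the real (undocumented) AES scheme."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Simulate a softswitch config file that stores the DB password encrypted.
plaintext = b"s3cr3t"
stored = base64.b64encode(xor_cipher(plaintext, KEY)).decode()
config_text = f"""
[database]
host = 127.0.0.1
user = vos
password = {stored}
"""

# What the malware then does: parse the config, pull the encrypted
# credential, and decrypt it with the recovered key.
cfg = configparser.ConfigParser()
cfg.read_string(config_text)
encrypted = base64.b64decode(cfg["database"]["password"])
recovered = xor_cipher(encrypted, KEY)
print(recovered.decode())  # s3cr3t
```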


Open-sourcing TensorFlow with DirectML

TensorFlow is a widely used machine learning framework for developing, training, and distributing machine learning models. Machine learning workloads often involve tremendous amounts of computation, especially when training models. Dedicated hardware such as the GPU is often used to accelerate these workloads. TensorFlow can leverage both Central Processing Units (CPUs) and GPUs, but its GPU acceleration is limited to vendor-specific platforms that vary in support for Windows and across its users’ diverse range of hardware. Bringing the full machine learning training capability to Windows, on any GPU, has been a popular request from the Windows developer community. The DirectX platform in Windows has been accelerating games and compute applications on Windows for decades. DirectML extends this platform by providing high-performance implementations of mathematical operations—the building blocks of machine learning—that run on any DirectX 12-capable GPU. We’re bringing high-performance training and inferencing on the breadth of Windows hardware by leveraging DirectML in the TensorFlow framework. 


Developing a plan for remote work security? Here are 6 key considerations

Training needs to address all aspects of your structure, specifically: information security, data security, cybersecurity, computer security, physical security, IoT security, cloud security, and individual security. Each area of an architecture needs to be tested and hardened regularly for your organization to truly be shielded from security breaches. Be specific about your program: train your staff on how to defend your information around your HR records (SSNs, PII, etc.) and data that could be exposed (shopping cart, customer card numbers), as well as in cyber defense to provide tools against nefarious actors, breaches and threats. Staff must be trained to know how to lock down computers, so individual machines and network servers are safe. This training should also encompass how to ensure physical security, to protect your storage or physical assets. This comes into play more as the IoT plays a larger role in connecting our devices and BYOD policies allow for more connections to be made between personal and corporate assets. Individual security: each employee is entitled to be secure in their work for a company, and that includes privacy concerns and compliance issues.


Phishing attack baits victims by promising access to quarantined emails

As analyzed by the Cofense Phishing Defense Center, this phishing attack is directed toward employees within an organization. Impersonating the technical support team of the user's employer, the campaign pretends to have quarantined three email messages, blocking them from reaching the recipient's inbox. Clicking on a link promises access to these messages but instead directs the person to a phishing page. The user is then prompted to sign in with their email account credentials, which are then captured by the attacker. The campaign seems convincing in a variety of ways, according to Cofense. By spoofing the account of the internal support staff, the phishing email appears to come from a trusted source. The quarantine notice sounds real, even claiming that the quarantined messages failed to process and must be reviewed to confirm their validity. Further, the notice has an air of immediacy by saying that two of the messages are considered valid and will be deleted in three days unless action is taken. Such a notice could convince the recipient that these are messages of importance to their organization, requiring a quick response to review them before they're gone.


Laying The Groundwork For ‘Fintech 2.0’ With Digital Assets

Increasingly, government entities are interested in stablecoin technology as well. While it's a promising development in the world of digital assets, Woodford said he doesn't expect state-backed initiatives to go live and take off anytime soon. Rather, the biggest value in these efforts is in validating digital assets as a whole. "If you look at what has caused the shift in mentality in the last 12-18 months, it went from, 'No, we don't want this,' to, 'No, but this is interesting' to the point now where it's interesting and people are actively engaging in this space," he explained. "One of the reasons for that is because of the sentiment, caused by those government announcements. It's one driver, but it's more important and meaningful now in terms of how it's adjusted the attitude." The fact is, any dramatic change in the world's payments landscape isn't going to happen overnight — certainly not a shift from fiat currency toward digital assets like bitcoin. It's part of the reason why stablecoin technology is so popular; it's a blend between fiat and digital currency, and that mix is critical to driving traction. As such, Zero Hash, which recently announced the closure of its Series C funding round, is planning to not only augment its lending offering, but to integrate ACH processing capabilities within its infrastructure.


Smart contact lens prototype raises eyebrows

The human iris controls pupil size in response to light, a critical function that allows the retina to take in appropriate sensory information. Too much light and the world is washed out, too little and it's veiled in darkness. A host of eye diseases and deficiencies inhibit the iris from responding appropriately, including aniridia and keratoconus. Light sensitivity, similarly, is a painful debilitation and is often associated with chronic migraine. Researchers at Imec, an innovation hub based in Belgium, along with partners like CMST, a Ghent University-affiliated research group, the Instituto de Investigación Sanitaria Fundación Jiménez Díaz in Madrid, Spain, and Holst Centre have been developing a low-power wearable solution. The contact lens's iris aperture is tunable thanks to an integrated liquid crystal display (LCD) that manipulates concentric rings. "By combining our expertise on miniaturized flexible electronics, low-power ASIC design and hybrid integration, we have demonstrated the capacity to develop a solution for people who suffer from iris deficiencies, higher order aberrations and photophobia, a common yet debilitating symptom seen in many neuro-ophthalmic disorders," says researcher Prof. Andrés Vásquez.


3 tips for supercharging your remote workforce with AI and automation

Organisations today are facing numerous pressures to enable a remote workforce, particularly in the IT function, since we have entered the post-Covid era. At a time when the traditional modus operandi is constantly being tested, there are some ‘new’ approaches that have actually been in use in other parts of the market for a while now. We can take several lessons from the consumer tech world and how it leverages automation and AI to reduce maintenance and simplify management. Let’s take the Nest thermostat as an example. A single thermostat changes temperature about 1,500 times per year, so a large house with 3 thermostats changes temperature about 4,500 times per year. ... Make sure you have a single API endpoint in the cloud to enumerate and automate all of your storage assets on-prem. Having a cloud-managed platform provides the visibility and orchestration of your assets across sites, servers and applications, and you can take advantage of a single API in the cloud to then automate all or a portion of those as needed. You get an aggregated view, or you can filter by data centre or application, server group, etc. Then ask interesting questions like, where is there available capacity for a new project?
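
The aggregated-inventory idea above can be sketched as one list of asset records, as a single cloud API endpoint might return them, filtered locally by data centre and by free capacity. The field names and figures are hypothetical, not any vendor's actual API.

```python
# Hypothetical response from a single cloud inventory endpoint covering
# all on-prem storage assets across sites.
assets = [
    {"name": "array-01", "datacentre": "london", "free_tb": 40,  "app": "erp"},
    {"name": "array-02", "datacentre": "london", "free_tb": 5,   "app": "crm"},
    {"name": "array-03", "datacentre": "dublin", "free_tb": 120, "app": "analytics"},
]

def by_datacentre(assets, dc):
    """Filter the aggregated view down to one data centre."""
    return [a for a in assets if a["datacentre"] == dc]

def with_capacity(assets, needed_tb):
    """Answer: where is there available capacity for a new project?"""
    return [a["name"] for a in assets if a["free_tb"] >= needed_tb]

print(with_capacity(assets, 30))  # arrays able to host a 30 TB project
```

Against a real platform, the `assets` list would come from one authenticated API call rather than being defined inline; the filtering logic is the same either way.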


Plan for change but don’t leave security behind

The best advice is to plan for change – technical, process and culture – but do not, whatever you do, leave security till last. It has to be front and centre of any plans you make. One concrete change that you can make immediately is taking your security people off just “fire-fighting duty”, where they have to react to crises as they come in: businesses can consider how to use them in a more proactive way. People don’t scale, and there’s a global shortage of security experts. So, you need to use the ones that you have as effectively as you can, and, crucially, give them interesting work to do, if you plan to retain them. It’s almost guaranteed that there are ways to extend their security expertise into processes and automation which will benefit your broader teams. At the same time, you can allow those experts to start preparing for new issues that will arise, and investigating new technologies and methodologies which they can then reapply to business processes as they mature. ... One of the main mistakes we see businesses make is attempting to deploy Kubernetes without the appropriate level of in-house expertise. Kubernetes is an ecosystem, rather than a one-off executable, that relies on other services provided by open source projects.



Quote for the day:

"Leadership flows from the minds of followers more than from the titles of leaders, more from the perception of willing followers than from anointment." -- Lance Secretan

Daily Tech Digest - September 10, 2020

Does An Analytics Head Require A Doctoral Degree?

Obviously, researchers in business are not expected to publish papers or guide students as their academic counterparts do. They are instead relied upon to analyze complex business problems methodically, as a scientist would. They are expected to make suitable approximations, define some simple parts within the complex whole, and attack them using known repeatable, robust principles and techniques. ... Let us say a large IT services company wants to fill leadership roles in the data science consulting practice. This person should have enough technical depth and the ability to identify the business gaps, communicate with the clients and, most importantly, build solutions that provide measurable business value (interestingly, this last skill is never considered a core competency in any traditional PhD in AI or other Master's and Bachelor's courses). Let us say an IT product company decides to smarten its application and wants leadership that can take it to the market quickly and profitably. The leaders should have the skill to define the product, design the technicalities, and lead the data science and DevOps teams compassionately and efficiently for rapid design and development. Hence, a leader in data science is not necessarily a technical expert who worked in the company long enough, or a business leader who is a taskmaster!


Ripple20 Malware Highlights Industrial Security Challenges

Since availability is critical to industrial control systems (ICS), and since the systems themselves can be fragile and quirky, these are generally the responsibility of operational technology (OT) teams. The information technology (IT) team usually manages the corporate network. OT employees are familiar with process technology and the systems they manage, but they do not generally know a great deal about information security, which can lead to insecure deployments. One fairly common situation for manufacturers is a divide, sometimes adversarial, between the IT and OT staff within a company. OT employees do not want the IT staff to tamper with their systems out of fear of downtime that can cost the company. From what we have seen, these relationships often resemble red team versus blue team attitudes at many organizations. The blue team can resent the efforts of the red team because those efforts create more work for the blue team and can be considered a criticism of their work. OT employees also often don't want to consult with their IT counterparts when making arrangements such as remote access, leading to situations such as RDP on control networks commonly being exposed to the public Internet.


India can soon be the tech garage of the world

The government has a crucial role to play in positioning India as the Tech Garage of the World. It should act as a catalyst, and bring together the synergies of the private sector with the aim of innovating for India and the world. It has the potential to provide an enabling environment and a favourable regulatory ecosystem for the development of technology products and provide the size and scale necessary for their rollout. The product development should ideally be undertaken through private entrepreneurship, with the government acting as a facilitator. The key principles of product design should incorporate transparency, security and ease of access. The products must have open architecture, should be portable to any hosting environment and should be available in official and regional languages. The irrevocable shift brought about by covid-19 presents opportunities to develop new technology platforms. In this process, data integrity, authenticity and privacy should be embedded into the design of a product. A balance needs to be struck between regulation and product design through a dynamic collaboration between the government and technology entrepreneurs.


The State of Chatbots: Pandemic Edition

Generally speaking, there are two types of chatbots right now. The first kind is the more primitive kind that is based on simple question and answer rules. This kind is the easiest to deploy quickly, in response to some catastrophic event, like, for instance, a pandemic. It has a scripted set of answers. The problem with this kind of chatbot is that it is very limited, and it can't be enhanced or expanded. It's a one-trick chatbot. "The deterministic-rules based approach chatbots are easy to stand up quickly," Ian Jacobs, a principal analyst at Forrester Research, told InformationWeek. That means there was a huge number of these deployed during the pandemic. "There was an increase in call volume, and you were doing anything you could to get answers to customers without hiring another thousand call center agents," he said. These bots were doing very simple things, but "We are getting to the point where the value that brands are getting out of those very simple bots has already been achieved." One example of this type of bot was deployed by a credit union in the northwestern United States in April when stimulus checks were on the way, Jacobs said. This organization stood up a simple bot designed to answer basic questions that people were asking about the checks.


Digital Transformation Success Elusive For Financial Institutions

When financial institution executives were asked about the importance of alternative digital transformation strategies, improving the overall customer experience was considered to be of high or very high importance by 88% of organizations. The importance of improving the customer experience was followed closely by the need to improve the use of data, AI and advanced analytics (76% rated high or very high). Illustrating the perceived broad scope of digital transformation initiatives at most financial institutions, the majority of the other possible digital transformation strategies were each rated almost identically by financial institution executives in the Digital Banking Report research. Innovation agility, improving marketing and sales, improved efficiency, improved risk management and reducing costs were each rated high or very high by roughly six in ten executives. It is a bit concerning that the need to change the existing business model and to transform legacy core systems were considered the least important strategies, despite research indicating that these strategies are of significant importance for transformation success.


Organizations must rethink traditional IT strategy to succeed in the new normal

This newfound self-confidence, combined with IT pros’ achievements during this time, will completely transform how IT is viewed by the business in the future. IT may earn a more prominent voice in the C-suite, as 40% of surveyed IT pros believe they will now be involved in more business-level meetings. Likewise, IT’s role will be up-leveled due to the vast upskilling 26% of IT pros underwent during this experience. With 31% admitting there’s a need to rethink internal processes to better accommodate the rapid change of pace required post-COVID, it’s highly likely a focus on IT pros’ upskilling will continue into the future. “As always, with new responsibilities comes the need for new skills. While almost half of survey respondents felt they received the training required to adapt to changing IT requirements, nearly one-third experienced the opposite, and are at risk of being left behind as IT teams continue to grapple with how best to support the new normal,” said Johnson. IT pros said they’ve gained an increased sense of confidence in their expanded roles, responsibilities, and ability to adapt to unexpected change in the future, despite contending with more challenging working conditions over the course of the pandemic.


Why Linux still needs a flagship distribution

Now, imagine a single distribution has been chosen, from the hundreds of currently available distributions, to represent Linux to hardware manufacturers, vendors, and software companies. That one Linux distribution would be used by hardware manufacturers and software companies to create computers and software guaranteed to run on Linux. That distribution would have only one desktop environment, one package manager, one init system, and the current stable version of the Linux kernel. Users could also download this Linux distribution and use it at will, but the primary purpose of "Flagship Linux" would be to make things easier for manufacturers and developers. Set aside your affinity for the Linux distribution you use and ponder this for a moment: Would you rather argue over which distribution is the best, or would you rather see Linux enjoy massive growth in the desktop and laptop arenas? We've already seen a number of manufacturers start the rollout of preinstalled Linux laptops. Lenovo, Dell, and HP are all joining in on the fun, but the process hasn't been easy. As you can see, those manufacturers are, for the most part, all winnowing down the selection of Linux distributions available.


Federated Machine Learning for Loan Risk Prediction

A model is only as strong as the data it’s provided, but what happens when data isn’t readily accessible or contains personally identifying information? In this case, can data owners and data scientists work together to create models on privatized data? Federated learning shows that it is indeed possible to pursue advanced models while still keeping data in the hands of data owners. This new technology is readily applicable to financial services, as banks have extremely sensitive information ranging from transaction history to demographic information for customers. In general, it’s very risky to give data to a third party to perform analytical tasks. However, through federated learning, the data can be kept in the hands of financial institutions and the intellectual property of data scientists can also be preserved. In this article, we will demystify the technology of federated learning and touch upon one of the many use cases in finance: loan risk prediction. Federated Learning, in short, is a method to train machine learning (ML) models securely via decentralization. That is, instead of aggregating all the data necessary to train a model, the model is instead sent to each individual data owner.
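
The round-trip described above — send the model to each data owner, train locally, return only the updated parameters for averaging — can be sketched with a toy federated-averaging loop. Here three hypothetical "banks" each hold private (x, y) pairs drawn from the same underlying relationship y = 2x; the raw data never leaves each client.

```python
# Each client's private dataset stays local; only model weights move.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
    [(5.0, 10.0)],
]

def local_update(w, data, lr=0.01):
    """A few SGD steps on one client's private data (model: y = w * x)."""
    for x, y in data:
        w -= lr * 2 * (w * x - y) * x  # gradient of the squared error (w*x - y)^2
    return w

w_global = 0.0
for _ in range(50):  # communication rounds
    # Server sends w_global out; clients train locally and send weights back.
    local_weights = [local_update(w_global, data) for data in clients]
    w_global = sum(local_weights) / len(local_weights)  # FedAvg aggregation

print(round(w_global, 3))  # converges toward the true slope of 2.0
```

Production federated learning (e.g. for loan risk models) adds secure aggregation, weighting by client dataset size, and far richer models, but the data-stays-put structure is the same.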


How to Protect Chatbots from Machine Learning Attacks

Chatbots are particularly vulnerable to machine learning attacks due to their constant user interactions, which are often completely unsupervised. We spoke to Scanta to get an understanding of the most common cyber attacks that chatbots face. Scanta CTO Anil Kaushik tells us that one of the most common attacks they see are data poisoning attacks through adversarial inputs. Data poisoning is a machine learning attack in which hackers contaminate the training data of a machine learning model. They do this by injecting adversarial inputs, which are purposefully altered data samples meant to trick the system into producing false outputs. Systems that are continuously trained on user-inputted data, like customer service chatbots, are especially vulnerable to these kinds of attacks. Most modern chatbots operate autonomously and answer customer inquiries without human intervention. Often, the conversations between chatbot and user are never monitored unless the query is escalated to a human staff member. This lack of supervision makes chatbots a prime target for hackers to exploit. To help companies protect their chatbots and virtual assistants, Scanta is continuously improving their ML security system, VA Shield.
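
The data-poisoning mechanism Kaushik describes can be made concrete with a toy example: a word-frequency spam scorer retrained on user-submitted messages, where an attacker's mislabeled submissions flip a later prediction. The classifier, messages, and labels are all invented for illustration; real chatbot models are far more complex, but the failure mode is the same.

```python
from collections import defaultdict

def train(samples):
    """Count, per word, how often it appears and how often in spam."""
    spam, total = defaultdict(int), defaultdict(int)
    for text, label in samples:
        for word in text.split():
            total[word] += 1
            if label == "spam":
                spam[word] += 1
    return spam, total

def is_spam(model, text):
    spam, total = model
    words = [w for w in text.split() if total[w]]
    score = sum(spam[w] / total[w] for w in words) / len(words)
    return score > 0.5

clean = [
    ("win free prize", "spam"), ("free money now", "spam"),
    ("claim prize now", "spam"), ("meeting at noon", "ham"),
    ("lunch tomorrow", "ham"), ("project update", "ham"),
]
print(is_spam(train(clean), "win free prize"))     # True: caught

# Adversarial inputs: the attacker repeatedly submits spam phrasing
# labeled as legitimate, contaminating the next training run.
poisoned = clean + [("win free prize", "ham")] * 3
print(is_spam(train(poisoned), "win free prize"))  # False: slips through
```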


The Expanding Role of Metadata Management, Data Quality, and Data Governance

After the data has been accurately defined, it is important to put in place procedures to assure the accuracy of the data. Imposing controls on the wrong data does no good at all. Which raises the question: How good is your data quality? Estimates show that, on average, data quality is an overarching industry problem. According to data quality expert Thomas C. Redman, payroll record changes have a 1% error rate; billing records have a 2% to 7% error rate; and the error rate for credit records can be as high as 30%. But what can a DBA do about poor quality data? Data quality is a business responsibility, but the DBA can help by instating technology controls. By building constraints into the database, overall data quality can be improved. This includes defining referential integrity in the database. Additional constraints should be defined in the database as appropriate to control uniqueness, as well as data value ranges using check constraints and triggers. Another technology tactic that can be deployed to improve data quality is data profiling. Data profiling is the process of examining the existing data in the database and collecting statistics and other information about that data.
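
The constraint types mentioned above can be demonstrated in a few lines using SQLite: a foreign key enforces referential integrity, and a check constraint enforces a value range. The schema and table names here are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
con.execute("""
    CREATE TABLE invoice (
        id      INTEGER PRIMARY KEY,
        cust_id INTEGER NOT NULL REFERENCES customer(id),  -- referential integrity
        amount  REAL NOT NULL CHECK (amount > 0)           -- value-range control
    )
""")
con.execute("INSERT INTO customer VALUES (1)")
con.execute("INSERT INTO invoice VALUES (1, 1, 99.50)")  # valid row

# Both bad rows are rejected by the database itself, not by application code.
for bad in [(2, 42, 10.0),   # no customer 42: violates referential integrity
            (3, 1, -5.0)]:   # negative amount: violates the check constraint
    try:
        con.execute("INSERT INTO invoice VALUES (?, ?, ?)", bad)
    except sqlite3.IntegrityError as e:
        print("rejected:", e)
```

Pushing these rules into the schema means every application writing to the table gets the same quality guarantees for free.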



Quote for the day:

"Concentrate all your thoughts upon the work in hand. The Sun's rays do not burn until brought to a focus." -- A.G. Bell

Daily Tech Digest - September 09, 2020

Use cases for AI and ML in cyber security

With more employees working from home, and possibly using their personal devices to complete tasks and collaborate with colleagues more often, it’s important to be wary of scams that are afoot within text messages. “With malicious actors recently diversifying their attack vectors, using Covid-19 as bait in SMS phishing scams, organisations are under a lot of pressure to bolster their defences,” said Brian Foster, senior vice-president of product management at MobileIron. “To protect devices and data from these advanced attacks, the use of machine learning in mobile threat defence (MTD) and other forms of managed threat detection continues to evolve as a highly effective security approach. “Machine learning models can be trained to instantly identify and protect against potentially harmful activity, including unknown and zero-day threats that other solutions can’t detect in time. Just as important, when machine learning-based MTD is deployed through a unified endpoint management (UEM) platform, it can augment the foundational security provided by UEM to support a layered enterprise mobile security strategy. “Machine learning is a powerful, yet unobtrusive, technology that continually monitors application and user behaviour over time so it can identify the difference between normal and abnormal behaviour. ...”
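
The "normal vs. abnormal behaviour" idea at the heart of that quote can be sketched minimally: learn a baseline from past observations and flag values far outside it. Real MTD products use far richer models; the metric here (say, daily megabytes uploaded by an app) and the 3-sigma threshold are illustrative assumptions.

```python
import statistics

# Baseline of observed normal behaviour for one app on one device.
baseline = [100, 98, 102, 97, 103, 99, 101]
mean = statistics.mean(baseline)
std = statistics.pstdev(baseline)

def is_anomalous(value, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from normal."""
    return abs(value - mean) / std > threshold

print(is_anomalous(105))  # False: within normal day-to-day variation
print(is_anomalous(500))  # True: far outside the learned baseline
```

In practice the baseline is updated continuously and covers many behavioural signals at once, but the contrast between learned-normal and outlier is the core of the approach.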


Evilnum group targets FinTech firms with new Python-based RAT

The infection chain also adds a rogue scheduled task called “Adobe Update Task”, which executes yet another malicious downloader that poses as Adobe's Flash Player and is called Fplayer.exe. This file is a maliciously modified version of Nvidia's Stereoscopic 3D driver installer. It seems that the Evilnum attackers have gone to great lengths to maintain persistence and stealth by impersonating a variety of legitimate programs that administrators might not find suspicious on a Windows system. The PyVil RAT talks to the command-and-control (C&C) server using HTTP but the data inside is encrypted with a hard-coded key to hide it from network-level Web traffic inspection products. In the past, Evilnum configured its malware to only talk to command-and-control servers using IP addresses, not domain names. However, Cybereason has detected a growing number of domains being associated with the IP addresses used by the Evilnum C&C infrastructure during the past weeks, signaling a change in tactics as well as a growing infrastructure.


Open source data control for cloud services with Apache Ranger

RBAC is based on the concepts of users, roles, groups, and privileges in an organization. Administrators grant privileges or permissions to pre-defined organizational roles—roles that are assigned to subjects or users based on their responsibility or area of expertise. For example, a user who is assigned the role of a manager might have access to a different set of objects and/or is given permission to perform a broader set of actions on them as compared to a user with the assigned role of an analyst. When the user generates a request to access a data object, the access control mechanism evaluates the role assigned to the user and the set of operations this role is authorized to perform on the object before deciding whether to grant or deny the request. RBAC simplifies the administration of data access controls because concepts such as users and roles are well-understood constructs in a majority of organizations. In addition to being based on familiar database concepts, RBAC also offers administrators the flexibility to assign users to various roles, reassign users from one role to another, and grant or revoke permissions as required. Once an RBAC framework is established, the administrator's role is primarily to assign or revoke users to specific roles. 
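
The evaluation path described above — user to role, role to permitted operations, then grant or deny — can be sketched in a few lines. The role and permission names here are illustrative, not Apache Ranger's actual policy model.

```python
# Roles map to the (action, object-type) pairs they may perform.
role_permissions = {
    "manager": {("read", "report"), ("write", "report"), ("read", "dataset")},
    "analyst": {("read", "dataset")},
}
# Users are assigned roles based on responsibility or area of expertise.
user_roles = {"alice": {"manager"}, "bob": {"analyst"}}

def is_authorized(user, action, obj_type):
    """Grant the request only if some role of the user permits the operation."""
    return any((action, obj_type) in role_permissions[r]
               for r in user_roles.get(user, ()))

print(is_authorized("alice", "write", "report"))  # True: manager role allows it
print(is_authorized("bob", "write", "report"))    # False: analyst role does not
```

Note how administration stays simple: reassigning `bob` to the `manager` role changes what he can do without touching any per-object grants, which is the flexibility the excerpt highlights.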


Using Measurement to Optimise Remote Work

Citrix’s Remote Works Podcast recently interviewed Laura Giurge, a post-doctoral researcher at London Business School and Oxford University’s Wellbeing Research Centre. Giurge explained that the pandemic has created a "big experiment of working from home." She explained that its findings were challenging the traditional assumption that productivity is measured in hours worked, rather than the impact of an employee’s output. Giurge explained that this required a change in mindset and was particularly challenging for traditional managers: It is really hard for managers, if you are really used to seeing your employees in the office and all of a sudden you’re not. It’s very difficult. But if you start from a mindset of experimentation and understanding there are better ways for experimenting with new ways of working and seeing what works, then you are likely to get your employees to work better and also be happier. Longman wrote that he "calculated the average number of stories" completed "during 2019 and used this as a comparison with 2020 data." By examining trends by month and by quarter he wrote that "both views suggested that the work completed during lockdown was within ... expected levels of volatility." 
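
The comparison Longman describes can be sketched numerically: treat the 2019 monthly story counts as the baseline and check whether lockdown-period months fall inside an expected band (here mean ± 2 standard deviations, a common convention; all counts are invented for illustration).

```python
import statistics

# Hypothetical monthly story counts for 2019 (the baseline year).
stories_2019 = [41, 38, 44, 40, 37, 43, 39, 42, 40, 38, 41, 45]
mean = statistics.mean(stories_2019)
band = 2 * statistics.stdev(stories_2019)

def within_expected(count):
    """Is a month's output inside the expected volatility band?"""
    return abs(count - mean) <= band

# Hypothetical counts for the first lockdown months of 2020.
lockdown_2020 = [39, 36, 42, 44]
print(all(within_expected(c) for c in lockdown_2020))  # True: within expected volatility
```

This mirrors the article's conclusion: output during lockdown sat within the normal variation of the prior year, rather than showing a productivity collapse.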


Data Labeling for Natural Language Processing: a Comprehensive Guide

Once you have identified your training data, the next big decision is in determining how you’d like to label that data. The labels to be applied can lead to completely different algorithms. One team browsing a dataset of receipts may want to focus on the prices of individual items over time and use this to predict future prices. Another may be focused on identifying the store, date and timestamp and understanding purchase patterns. Practitioners will refer to the taxonomy of a label set. What level of granularity is required for this task? Is it enough to understand that a customer is sending in a customer complaint and route the email to the customer support team? Or would you like to specifically understand which product the customer is complaining about? Or even more specifically, whether they are asking for an exchange/refund, complaining of a defect, an issue in shipping, etc.? Note that the more granular the taxonomy you choose, the more training data will be required for the algorithm to adequately train on each individual label; phrased differently, each label requires a sufficient number of examples, so more labels means more labeled data overall.
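
The taxonomy trade-off above can be shown by counting the same labeled emails at two levels of granularity: the coarse label accumulates plenty of examples, while each fine-grained label gets only a few, so a granular taxonomy needs proportionally more labeled data. The labels and counts are invented.

```python
from collections import Counter

# Each email carries a (coarse, fine) label pair from a two-level taxonomy.
emails = [
    ("complaint", "refund"),   ("complaint", "refund"),
    ("complaint", "defect"),   ("complaint", "defect"),
    ("complaint", "shipping"), ("complaint", "shipping"),
    ("other", "other"),        ("other", "other"),
]
coarse = Counter(c for c, _ in emails)
fine = Counter(f for _, f in emails)

print(coarse["complaint"])  # 6 examples support the coarse label
print(fine["refund"])       # only 2 support each fine-grained label
```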


Chilean bank shuts down all branches following ransomware attack

The incident is currently being investigated as having originated from a malicious Office document received and opened by an employee. The malicious Office file is believed to have installed a backdoor on the bank's network. Investigators believe that on the night between Friday and Saturday, hackers used this backdoor to access the bank's network and install ransomware. Bank employees working weekend shifts discovered the attack when they couldn't access their work files on Saturday. BancoEstado reported the incident to Chilean police, and on the same day, the Chilean government sent out a nationwide cyber-security alert warning about a ransomware campaign targeting the private sector. While the bank initially hoped to recover from the attack unnoticed, the damage was extensive, according to sources, with the ransomware encrypting the vast majority of internal servers and employee workstations. The bank disclosed the attack on Sunday, but as time went by, bank officials realized employees wouldn't be able to work on Monday and decided to keep branches closed while they recovered. Luckily, it appears the bank had done its job and properly segmented its internal network, which limited what the hackers could encrypt.


How to ensure cybersecurity and business continuity plans align

Ideally, according to industry good practice, a disruptive incident should trigger an IR plan that assesses the damage and initiates steps to respond quickly to the cyber incident. Results of the IR plan can trigger a BC or a DR plan, or both, based on the nature of the event. BC/DR plans recover and restore critical assets -- people, processes, technology and facilities -- the business needs to function. Cybersecurity plans respond to specific disruptive events and may include an IR plan component to determine the nature of the event before launching response activities. The key is to determine at what point the cybersecurity attack threatens the organization and its ability to conduct business. This suggests that descriptive language should be added to cybersecurity plans to trigger IR, as well as BC/DR plans. Let's assume there's a full complement of plans in place that deal with business- and technology-focused incidents. In some cases, only a specific security strategy or plan -- e.g., information security -- will be needed. In other situations, one or more plans may need to be launched. The figure below depicts a simple decision flow diagram showing how such plan linkages may be arranged and launched in response to a cybersecurity attack.
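The plan-linkage decision flow described above can be sketched as a simple rule table. The plan names (IR, BC, DR) follow the text, but the incident attributes and conditions below are assumptions invented for the example:

```python
# Illustrative sketch of a decision flow linking cyber incidents to plans.
def plans_to_trigger(incident):
    """Map a disruptive incident to the response plans it should launch."""
    plans = ["IR"]  # every disruptive incident starts with incident response
    if incident.get("business_processes_down"):
        plans.append("BC")  # business continuity: keep critical work going
    if incident.get("systems_destroyed"):
        plans.append("DR")  # disaster recovery: restore damaged technology
    return plans

# A ransomware outbreak that halts operations and encrypts servers
print(plans_to_trigger({"business_processes_down": True,
                        "systems_destroyed": True}))  # ['IR', 'BC', 'DR']
```

The point of the sketch is the ordering: the IR assessment always runs first, and its results determine whether the BC plan, the DR plan, or both are launched.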


UK tech sector vacancies up 36% during summer — Tech Nation

“Since lockdown, companies have come to realise that they need industrial-grade technology to run their businesses and tech companies are hiring people to service these new customers, expand and build new products,” said Haakon Overli, co-founder of enterprise software-focused venture fund Dawn Capital. “We’re seeing it right across our portfolio.” The Tech Nation research also suggests that recovery from the pandemic is set to be uneven, with industries such as travel and retail predicted to drastically cut their workforces, while others are able to prosper from changes in customer behaviour. Additionally, the report said the trend of remote working will continue to open up “high-paid, quality” opportunities to residents outside larger cities. Recent research from Culture Shift found that culture has improved for tech sector employees while remote working. However, 50% said they feel isolated while working from home. Despite employment in the UK tech sector looking promising due to the surge in vacancies, the skills gap remains an issue: two thirds of businesses already have unfilled digital skills vacancies, while 58% say they’ll need significantly more digital skills in the next five years, according to the CBI.


Why More Healthcare Providers are Moving to Public Cloud

One of the hardest truths of this extraordinary time -- apart from the human suffering -- is that the need for dynamic surge capacity will not disappear when a vaccine is available. As the World Economic Forum has said, we have entered a new era where the risk of future pandemics is high. This forever alters the infrastructure needed to support shifting demands on technology. The public cloud offers the systems resilience that healthcare providers need in order to sustain operations under severe disruption, flexing to address highly volatile customer demand and managing vastly increased needs for remote network access. Providers long viewed investing in the public cloud as a risky business because of security concerns. But over the past two years, many have begun their cloud journey buoyed by other industries’ and research institutions’ embrace of its “deny by default” security posture and, most importantly, limitless opportunities for innovation. There could not be a better time for this. An investment in systems resilience via cloud is an investment in business enablement. A resilient technology infrastructure scales up or down on demand based on real-time changes in usage to support care volume variability. It identifies traffic spikes and automatically adjusts capacity to drive responsiveness with new cost efficiencies.
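The scale-up/scale-down behaviour described here can be sketched as a simple feedback rule. The utilisation thresholds and the doubling/halving policy below are invented for illustration; they are not any particular cloud provider's autoscaling API:

```python
# Toy autoscaling rule: follow real-time utilisation up and down.
def adjust_capacity(current_instances, cpu_utilisation,
                    scale_up_at=0.75, scale_down_at=0.25):
    """Return the new instance count for the observed utilisation."""
    if cpu_utilisation > scale_up_at:
        return current_instances * 2           # absorb the traffic spike
    if cpu_utilisation < scale_down_at and current_instances > 1:
        return max(1, current_instances // 2)  # release idle capacity
    return current_instances                   # steady state: no change

print(adjust_capacity(4, 0.90))  # spike: scale out
print(adjust_capacity(4, 0.10))  # quiet period: scale in
```

Real autoscalers add cooldown periods and minimum/maximum bounds, but the core idea is the same closed loop: measure usage, compare to a target, adjust capacity.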


Cybersecurity Skills Gap Worsens, Fueled by Lack of Career Development

The fundamental causes for the skill gap are myriad, starting with a lack of training and career-development opportunities. About 68 percent of the cybersecurity professionals surveyed by ESG/ISSA said they don’t have a well-defined career path, and basic growth activities, such as finding a mentor, getting basic cybersecurity certifications, taking on cybersecurity internships and joining a professional organization, are often missing from their career development. The survey also found that many professionals start out in IT, and find themselves working in cybersecurity without a complete skill set. A full 63 percent of respondents in the survey said they’ve worked in cybersecurity for less than three years, with 76 percent starting as IT professionals before switching their career to cybersecurity. “Cybersecurity professionals often muddle through their careers with little direction, jumping from job to job and enhancing their skill sets on the fly rather than in any systematic way,” according to the report. To go along with this, the survey asked respondents to speculate on how long it takes a cybersecurity professional to become proficient at the job. The highest percentage of respondents (39 percent) believe it takes anywhere from three to five years to develop real cybersecurity proficiency, while 22 percent say two to three years, and 18 percent claim it takes more than five years.



Quote for the day:

"It's fine to celebrate success but it is more important to heed the lessons of failure." -- Bill Gates

Daily Tech Digest - September 08, 2020

Closing The Tech Skills Gap: 3 Key Factors For CEOs To Consider

Limited resources and tightened budgets have placed restrictions on hiring new talent and several industries were left scrambling to reskill and quickly adapt. While hiring new talent seems like a valid solution, in reality, the hiring, onboarding and culture development process requires a significant amount of time and dedication, impacting the overall company’s output. As enterprises continue to identify ways to do more with less, now is an opportune time for reskilling and upskilling initiatives to become part of the “new norm.” Reskilling and upskilling initiatives are not only beneficial to employees but also impactful to the enterprise. According to a recent study, nearly 30% of employees feel their skills will be redundant within the next two years, with 50% of those in Gen-Y and Gen-Z indicating that their skills will be irrelevant within the next four to five years. Although technology tends to create more jobs than it takes away, those fears are still incredibly prevalent. A workforce of the future must be prepared to welcome change and remain agile; they must also have the support and resources to further enhance their skills. Furthermore, employees will find comfort in knowing their company wants to invest in them and their future—and loyalty will likely follow.


Stretch or safe? The art of setting goals for your teams

With so little clarity about the future, how can leaders set business goals for the next six months to a year? During the dozen years between the 2008 financial crisis and the current pandemic, the world seemed far more stable, and budgeting was more of a predictable process. But now? Who knows. We are living in an era of VUCA, an acronym coined by the U.S. Army War College that stands for volatility, uncertainty, complexity, and ambiguity. This uncertainty is raising new challenges for a fundamental leadership skill: goal setting. It is as much an art as a science, because it requires finding the sweet spot between the aspirational and the realistic. Yes, there is something galvanizing and inspirational about a big stretch goal, as President John F. Kennedy knew in 1961, when he announced that the United States would put a man on the moon by the end of the decade, even though the longest time any American had spent in space was barely 15 minutes. The business leader’s job is to set an ambitious target that will bring out the best in a company’s teams and achieve what may seem impossible at first. These are the BHAGs — or big, hairy, audacious goals, in the words of Jim Collins, the author of Good to Great and other books.


Can AI help with your quest for global talent?

For candidates, AI can help to eliminate some of the most problematic human flaws in the recruitment process: hiring bias. Although often unintentional, stereotypes and personal prejudices are something which even the most conscientious recruiters can fall foul of. AI allows for blind applicant screening and levels the playing field. Chatbots can also help to improve the candidate experience and engagement by offering immediate replies to inquiries or queries, simple job applications and ongoing assistance throughout the process. Employers and HR personnel can benefit massively from AI, too. For starters, it can be used to scan CVs for certain keywords to shortlist the most suitable candidates intelligently. Predictive analysis can even determine which candidates are more likely to succeed in the roles — helping to improve the quality of the hire and ensure only the most retainable talents are brought on board. AI can also help companies reach passive candidates who aren’t actively seeking a new role — which can often be one of the best applicant pools. In the past, reaching these candidates involved poring through CV databases, lots of cold-calling and even more dead ends. 
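As a toy illustration of the CV keyword-scanning idea (not any vendor's actual algorithm), a shortlist score might be computed like this; the keywords and the example CV text are invented:

```python
import re

def keyword_score(cv_text, keywords):
    """Count how many target keywords appear in a CV (case-insensitive)."""
    found = {kw for kw in keywords
             if re.search(rf"\b{re.escape(kw)}\b", cv_text, re.IGNORECASE)}
    return len(found), sorted(found)

cv = "Senior engineer with Python, Kubernetes and AWS experience."
score, hits = keyword_score(cv, ["python", "kubernetes", "aws", "terraform"])
print(score, hits)  # 3 of the 4 keywords match
```

Production systems go well beyond literal keyword matching (synonyms, embeddings, structured parsing), but this shows the basic shortlisting mechanism: score each CV against the role's target terms and rank.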


99 Ransomware Problems - and a Decryptor Ain't One

Security experts say that more organizations have been putting in place viable defenses against ransomware, including frequently backing up all systems, and storing those backups offline. As a result, if they suffer a ransomware infection, they can simply wipe systems and restore from backups, without having to even consider paying a ransom. In response, beginning in November 2019, the Maze gang began exfiltrating data before crypto-locking systems, then using the threat of data leaking to try and force more victims to pay. Unfortunately, this strategy not only worked, but has been emulated by numerous other gangs ... Unfortunately, the move to exfiltrate data, name-and-shame victims and so on has been leading to higher profits for criminals. In numerous recent cases, despite being able to fully restore data from backups, victims have then felt "compelled to have to engage in an extortion negotiation and potentially a payment to a threat actor because of the potential for what they deemed to be irreparable harm to their business if the information is leaked, and so they end up paying to prevent that," says Coveware CEO Bill Siegel.


The New Capabilities in Endpoint Security for Businesses

Surprisingly, endpoint security evolved perhaps the most of any branch of cybersecurity. After all, look at the history of these critical business-level solutions. First, they only needed to protect a defined set of physical, on-premises devices from known malware and viruses. A simple antivirus solution could do the trick many times over. However, enterprises face an increasingly complex IT and device environment that in no way resembles ages past. For example, you need to contend with the increased necessity of remote work in the wake of COVID-19; in fact, these changes might result in permanent reassessments of work-from-home policies. That means new endpoints operating on personal Wi-Fi or public Wi-Fi connections, both of which pose cybersecurity challenges in terms of visibility and consistency. Additionally, those endpoints connecting to corporate networks are also undergoing changes. No less an authority than Gartner noted that bring-your-own-devices (BYOD) as a term may not adequately describe the situation. It might more accurately be summarized as Bring-Your-Own-PC (BYOPC), which adds another layer of endpoint security complexity.


How Diffblue uses AI to automate unit testing for Java applications

The irony, as Diffblue CEO Mathew Lodge pointed out in an interview, is how late the software industry is in embracing AI to improve software development, given how we've used AI to automate and disrupt so many other industries--from retail, travel, transportation, manufacturing, and more. Lodge said Diffblue researchers took advantage of the machine learning strategy that powered AlphaGo, Alphabet subsidiary DeepMind's software program that beat the world champion player of Go. While the company is starting with a Java solution (by far the most popular language in the Global 2000 where companies invest heavily in productivity tools), its technology can also be used to automate testing for most programming languages such as Python, JavaScript, and C#, among others.  Among the first customers to roll out Diffblue's solution is Goldman Sachs (with an annual IT budget larger than many countries' GDP). Using Diffblue's AI on one module with an important backend system, Goldman Sachs was able to expand existing unit test coverage from 36% to 72% in less than 24 hours, a feat that would have required more than eight days of developer time if done manually. Developer time savings? 90%.


The Cloud Is Not The Edge

Over the last 15 years, we have seen major growth in social and mobile categories and SaaS offerings. Most recently, a new technology has emerged called the internet of things (IoT), and it demands a new type of computing called edge computing.  Today, as we shift from doing all processing on Amazon Web Services or Microsoft Azure computers and move it to our businesses, construction zones, farms and trucks, we hear that edge computing will be “bigger than the cloud.” This new type of computing will provide augmented reality for remote service, real-time monitoring of equipment in the field, optimizations for natural resources and machine-learned energy efficiencies, among other returns. While it’s tempting to believe we can just move our cloud applications to the edge, this is not possible. Furthermore, companies that take such a strategy will struggle for years to come because the cloud is fundamentally different from edge computing. ... Edge-native architectures should expect a diverse infrastructure for deployment. This means that edge applications should easily run on bare metal processors, virtual machines and containers. Conversely, cloud offerings and services are built and heavily tuned for a single type of environment and cannot run just anywhere.


Low-Code Revolution to Prepare Manufacturers for Industry 4.0

The manufacturing industry is experiencing a move toward digitization. The sector has already adopted digital technologies like artificial intelligence, augmented reality, robotics, additive manufacturing, etc. These technologies have given manufacturers a competitive advantage in terms of manufacturing efficiency and cost. Because the pandemic is causing traditional supply chains and manufacturing environments to crumble, there is a need to move towards a digitally driven, more flexible and agile approach. In these challenging times, many leading companies are innovating and developing their own applications. Businesses that tailor their existing technical capability and resources to digital technology can limit COVID-19’s impact. In times like these, when there are limited resources and less time to build applications for business continuity, businesses are relying on low-code technology to create and pilot new applications at rapid rates. Low-code platforms are becoming popular among manufacturing companies as they deliver customized solutions and offer flexibility, scalability, and efficient technological innovation.


Five lessons for digital transformation success

Developing the right talents and skills is one of the important transformation initiatives. While some people might immediately say digital technologies are the key success factor, those who are experienced in the process would say that’s not necessarily so. Chan Suh, chief digital officer of business transformation specialist Prophet, warns against being seduced by the promises of technology’s magical tools for creating revenue growth. While businesses may need digital innovations such as artificial intelligence for deep insight, tech stacks are just tools and, without the right operating instructions, they either lie fallow or become money pits. Suh says it’s a mistake that has cost global businesses billions of dollars in wasted investments. “We need the conceptual strategies and innovations to guide our tech investments as well as the human expertise to use it properly. However, that human expertise is especially rare when it comes to navigating the highly complicated interdependencies of digitally powered businesses,” he says. With building capability, the key is the right mix of human expertise and technology working in a coherent, flexible operating model with the customer at the centre.


Delivering on your promises

Bertini and Koenigsberg make an impassioned and ambitious case for rewriting the rules of commerce. They argue that although customers want to buy a solution to a “job that needs to be done” (in the words of Clay Christensen), they’re offered only the means to buy that solution, typically by taking ownership of a product. This is due to a “combination of neglect, inertia, fear of change, and comfort with the status quo” on the part of companies. Buying a product (e.g., an engine) isn’t always a good proxy for the end goal (e.g., reliable high performance). Reserving particular wrath for healthcare, education, and advertising, the authors focus on three forms of waste in the exchange between companies and customers: (1) access — customers can’t get the product (e.g., a car) they want because of the cost or a lack of stock; (2) consumption — they don’t or can’t use what’s offered (e.g., bundles of TV programs or a car that sits unused 90 percent of the time); and (3) performance — the product doesn’t deliver the value customers expect. “Lean commerce,” in which the fortunes of companies depend explicitly on delivering value to the customer, is a much more efficient model. To determine value, the authors use an end or outcome that can be easily understood, verified, and quantified. Feeling happy or amused is hard to measure, but measuring a laugh is easier.



Quote for the day:

"When things fall apart is when we usually have the most to learn about ourselves." -- Oprah

Daily Tech Digest - September 07, 2020

Brain-Inspired Electronic System Could Make AI 1,000 Times More Energy Efficient

In the new study, published in Nature Communications, engineers at UCL found that accuracy could be greatly improved by getting memristors to work together in several sub-groups of neural networks and averaging their calculations, meaning that flaws in each of the networks could be canceled out. Memristors, described as “resistors with memory,” as they remember the amount of electric charge that flowed through them even after being turned off, were considered revolutionary when they were first built over a decade ago, a “missing link” in electronics to supplement the resistor, capacitor, and inductor. They have since been manufactured commercially in memory devices, but the research team say they could be used to develop AI systems within the next three years. Memristors offer vastly improved efficiency because they operate not just in a binary code of ones and zeros, but at multiple levels between zero and one at the same time, meaning more information can be packed into each bit. Moreover, memristors are often described as a neuromorphic (brain-inspired) form of computing because, like in the brain, processing and memory are implemented in the same adaptive building blocks, in contrast to current computer systems that waste a lot of energy in data movement.
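The averaging idea can be shown with a toy numerical example. The per-network flaw values below are invented purely to illustrate how independent errors cancel; they have nothing to do with the actual UCL measurements:

```python
# Each memristor sub-network produces the true value plus its own
# systematic flaw; averaging the ensemble cancels much of the error.
true = 1.0
flaws = [0.15, -0.12, 0.05, -0.09, 0.01]   # invented per-network errors
predictions = [true + f for f in flaws]

ensemble = sum(predictions) / len(predictions)
print(abs(predictions[0] - true))  # worst single network is far off
print(abs(ensemble - true))        # the averaged estimate is near-exact
```

This only works when the flaws are roughly independent and unbiased, which is why the researchers organised the memristors into several sub-groups rather than one large network.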


Management skills: Five ways building your network will help you get ahead

Mark Gannon, director of business change and information solutions at Sheffield City Council, says smart digital leaders make sure they carry on learning – even once they get to the very top. Gannon says developing experiences outside the day job has always been important to him, both as full-time CIO and in his stint as a consultant before joining the council. "There's the basic stuff about just getting out there and understanding your customers and spending time to speak with them. Consulting was interesting because it gave me the opportunity to look outside my own experience and see what other organisations were doing. I think it's really important to be constantly learning," he says. Gannon suggests his determination to develop new skills might be something to do with having completed a doctorate prior to joining the IT profession. His interest in education continues to this day – Gannon is a school parent governor. "Being a governor is interesting and getting out and engaging with other networks in the city is something I do a lot. We've developed a cross-community network, called dotSHF, which is about how we bring together the work that's being done by sole traders, and private and public sector organisations around digital," says Gannon.


Telling tales: using behavioural AI to reconstruct attack storylines

Behavioral AI can be used to mitigate threats automatically—a seriously powerful gamechanger. The technology is capable of making a decision on the device, without relying on the cloud, or on humans, to tell it what to do. Monitoring behaviour is a tricky, complex problem, and you want to feed your algorithm robust, informative, context-rich data which really captures the essence of a program’s execution. To do this, you need to monitor the operating system at a very low level and, most importantly, link individual behaviours together to create full “storylines”. For example, if a program executes another program, or uses the operating system to schedule itself to execute on boot up, you don’t want to consider these different, isolated executions, but a single story. Training AI models on behavioural data is similar to training static models, but with the added complexity of the time dimension. In other words, instead of evaluating all features at once, you need to consider cumulative behaviours up to various points in time. Interestingly, if you have good enough data, you don’t really need an AI model to convict an execution as malicious. For example, if the program starts executing but has no user interaction, then it tries to register itself to start when the machine is booted, then it starts listening to keystrokes, you could say it’s very likely a keylogger and should be stopped.
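The keylogger example can be expressed as a check that a suspicious sub-sequence occurs, in order, within a process's storyline. The event names below are illustrative, not a real EDR schema:

```python
# Minimal rule-based sketch of the "storyline" idea: treat a process's
# events as one ordered sequence and flag a known-bad sub-sequence.
KEYLOGGER_PATTERN = ["start_without_user_interaction",
                     "register_boot_persistence",
                     "listen_keystrokes"]

def contains_subsequence(storyline, pattern):
    """True if the pattern events occur in order (not necessarily adjacent)."""
    it = iter(storyline)
    return all(event in it for event in pattern)

storyline = ["start_without_user_interaction", "open_file",
             "register_boot_persistence", "listen_keystrokes"]
print(contains_subsequence(storyline, KEYLOGGER_PATTERN))  # True
```

Note the unrelated `open_file` event in between does not break the match; that is the point of linking behaviours into one story rather than judging each event in isolation.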


Microsoft Updates Edge With Exciting New Features To Beat Chrome

Microsoft’s Edge browser is growing in popularity, reaching the number two position in the desktop browser market, even beating privacy-focused option Firefox. Now Microsoft has just unveiled a bunch of new features that make it a valid alternative to Google Chrome as an increasing number of people work from home. One very useful update, which would be great if it comes to fruition, was spotted by Windows Latest in the Edge Canary developer build: a new feature called “Web Capture” which allows you to take a screenshot of a webpage—in full or cropped—and copy it to the clipboard or preview it. ... Meanwhile, more new features to boost your security are expected in Edge 86, which is due to drop in the next few weeks, Microsoft has confirmed. This includes new alerts for the Edge password monitor if a compromised password is detected. At the same time, Edge will add the option to show or hide the favorites bar from the favorites management page. Edge will also add policy improvements for enterprises using the browser for various users and applications. Just last week, Microsoft started to roll out Edge 85 with multiple features aiming to help those working from home during the coronavirus pandemic.


How AI will automate cybersecurity in the post-COVID world

At a basic level, AI uses data to make predictions and then automates actions. This automation can be used for good or evil. Cybercriminals take AI designed for legitimate purposes and use it for illegal schemes. Consider one of the most common defenses attempted against credential stuffing – CAPTCHA. Invented a couple of decades ago, CAPTCHA tries to protect against unwanted bots by presenting a challenge (e.g., reading distorted text) that humans should find easy and bots should find difficult. Unfortunately, cybercriminal use of AI has inverted this. Google did a study a few years ago and found that machine-learning based optical character recognition (OCR) technology could solve 99.8% of CAPTCHA challenges. This OCR, as well as other CAPTCHA-solving technology, is weaponized by cybercriminals who include it in their credential stuffing tools. Cybercriminals can use AI in other ways too. AI technology has already been created to make cracking passwords faster, and machine learning can be used to identify good targets for attack, as well as to optimize cybercriminal supply chains and infrastructure. We see incredibly fast response times from cybercriminals, who can shut off and restart attacks with millions of transactions in a matter of minutes.


The Principles of Planning and Implementing Microservices

Each service should have a version, which updates regularly with every release. Versioning makes it possible to identify a service and deploy a specific version of it. It also enables the consumers of the service to be aware when the service has changed, and thereby avoid breaking the existing contract and the communication between the services. Different versions of the same service can coexist. With that, the migration from the old version to the new version can be gradual, without having too much impact on the whole application. ... In a microservices environment, there are many small services that communicate constantly with each other, so it is easy to get lost in what a service does or how to use its API. Documentation can help with that. Keeping valid, up-to-date documentation is a tedious and time-consuming task, which naturally tends to be prioritised low in a developer's task list. Therefore, automation is required instead of documenting manually (readme files, notes, procedures). There are various tools to codify and automate tasks that keep the documentation updated while the code continues to change. Tools like Swagger UI or API Blueprint can do the job: they can generate a web UI for your microservices API, which alleviates the orientation effort. Once again, standardization is an advantage; for example, Swagger implements the OpenAPI specification, which is an industry standard.
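A minimal sketch of coexisting service versions, assuming consumers pin a major version so old contracts keep working during a gradual migration. The registry, service names, and payload shapes are invented for the example:

```python
# Hypothetical version-aware dispatch: two majors of one service coexist.
REGISTRY = {}  # (service, major_version) -> handler

def register(service, version, handler):
    major = int(version.split(".")[0])
    REGISTRY[(service, major)] = handler

def dispatch(service, requested_version, payload):
    """Route a request to the handler matching its major version."""
    major = int(requested_version.split(".")[0])
    return REGISTRY[(service, major)](payload)

register("orders", "1.4.2", lambda p: {"total": p["qty"] * 10})
register("orders", "2.0.0", lambda p: {"total_cents": p["qty"] * 1000})

print(dispatch("orders", "1.9.0", {"qty": 3}))  # old contract still served
print(dispatch("orders", "2.1.0", {"qty": 3}))  # new contract in parallel
```

Routing on the major version only is a common convention: minor and patch releases are expected to stay backward compatible, while a major bump signals a contract change that consumers must opt into.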


How Cybercriminals Take the Fun Out of Gaming

The underground market is also active. In a recent blog, Singer broke down the world of cybercrime in games. "The first thing to understand about the criminals who attack the games industry is that they participate in a working, fluid, day-to-day economy that they manage completely themselves," he wrote. "Cybercriminals have built informal structures that mirror the efficiencies of standard enterprise operations. They have developers, QA folks, middle managers, project managers, salespeople, and even marketing and PR people who hype vendors and products." Austin Francisco, security analyst at Key Cyber Solutions (KCS) – who has "been gaming since the '90s" – says hackers advertise stolen goods and cheats as "a product and not like a hack," offering player values such as the ability to "have 100% accuracy aim" or "see people through walls," for example. Singer doesn't understand the appeal, but "there are enough people who enjoy it that there's a thriving industry," he says. One popular attack is account takeovers (ATO), which are used to steal other players' goods. It's a large market due to the sheer amount of value tied to a player account: from in-game currencies to achievements unlocked to player status and "skins".


“Enterprise-Class Open-Source Data Tools” Is Not an Oxymoron

Open source may bring up pictures of dark alleys and bug-ridden software, but in today’s data-driven world, there’s a new class of solutions. These open-source tools are the basis for inquiries into the deepest complexities of artificial intelligence and big data, designed around the massive data load we create each day. The open-source community works fast, addressing bugs, security loopholes, and the simple need to make streamlined tools for real-time insight. Today’s open-source tools result from years of research and a generation of developers who don’t remember a time when data wasn’t the new oil. Data itself is coming unlocked from previous silos and repositories, existing in a continuous state—data in motion. Leveraging open-source tools allows companies to dream of a reality in which company decisions are data-driven by the second. Every person in the organization has access to the data they need. Enterprises must find open-source tools with layers of capability explicitly designed for their unique data picture. These tools facilitate complex governance without creating pipeline bottlenecks. They provide automated documentation of changes, usage, and authorship.


Threat identification is IT ops' role in SecOps

Identifying important assets helps focus SecOps efforts. Additionally, IT operations teams should base threat identification practices on workflows. The goal is to understand workflows and their properties, as well as the statistical results of valid workflow patterns. IT ops teams can thus recognize the ways in which a workflow deviates from the norm, and potential threats because of this deviation. There are generally two pieces to this process: threat incident logging and tracking, and workflow monitoring for abnormal patterns. Many security threats to IT systems require multiple attempts by the attacker. At least some of these attempts get recognized, reported and logged as violations. However, logging tools often ignore a low volume of incidents. These tools use pattern analysis to indicate an active threat. To help the tools find these patterns, classify threat incidents. For example, a series of incidents from a single location or individual that has rarely generated an incident -- imagine someone entering the wrong password -- is a potential threat indicator. While multiple incidents stemming from one source is suspect, so is a series of incidents generated by different sources. Intruders might try several different IP addresses in an attack, for example. In this example, a pattern of events in the threat incident log will be obvious.
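The two log patterns described (many incidents from one source, and one attack spread over many sources) can be sketched as follows; the thresholds and the log format are assumptions invented for the example:

```python
from collections import Counter

def flag_threats(incidents, per_source_limit=3, distinct_source_limit=5):
    """incidents: list of (source, event) tuples from the violation log.

    Flags any single source generating repeated incidents, plus the
    distributed pattern where many different sources appear at once.
    """
    by_source = Counter(src for src, _ in incidents)
    flags = [src for src, n in by_source.items() if n >= per_source_limit]
    if len(by_source) >= distinct_source_limit:
        flags.append("distributed-source pattern")
    return flags

# Four failed passwords from one IP, one from another
log = [("10.0.0.5", "bad_password")] * 4 + [("10.0.0.9", "bad_password")]
print(flag_threats(log))  # only the repeat offender is flagged
```

This is the classification step the text recommends: individually each logged violation looks like noise, but grouping by source (or by time window) makes the pattern obvious.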


Demystifying Behavior Driven Development with Cucumber-JVM

Setting aside the fancy terms for end-to-end test writing such as reusability, maintainability, and scalability, I always prefer a simple definition: test cases should be written and arranged so that they can run any number of times, in any sequence, and with a variety of different datasets. However, it is not as simple as it sounds. This kind of test writing demands that different teams collaborate to discuss product behavior from the very first day. Behavior Driven Development is therefore based entirely on fair collaboration among the three amigos (Business Analyst, Developer, and Tester). Intriguingly, the primary reason for the popularity of BDD testing is its clear, concise, non-technical plain-English syntax [or any other international language of your choice]. This way, a business owner can play a significant yet prompt role by specifying the requirement in a language which is understood not just by different teams (developers and testers) but also by the testing framework itself. In our case of Cucumber-JVM, the commonly understandable language is Gherkin, which shapes the overall concept. Gherkin is a language with no technical barriers.
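To make this concrete, here is a minimal Gherkin feature of the kind the three amigos might agree on; the feature, scenario, and step wording are hypothetical, not taken from the article:

```gherkin
Feature: Account withdrawal
  As an account holder, I want to withdraw cash
  so that I can access my money outside banking hours.

  Scenario: Successful withdrawal within balance
    Given the account balance is 200 dollars
    When the account holder withdraws 50 dollars
    Then the remaining balance should be 150 dollars
```

Each Given/When/Then line is later bound to a step definition in Java, so the same sentence the business owner wrote becomes an executable check.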



Quote for the day:

"Hold yourself responsible for a higher standard than anybody expects of you. Never excuse yourself." -- Henry Ward Beecher

Daily Tech Digest - September 06, 2020

Crypto-Friendly Banking Platform Cashaa Expanding in India, US, Africa

India’s cryptocurrency market has been growing rapidly ever since the country’s supreme court quashed the RBI circular that banned financial institutions from providing services to crypto businesses. India currently does not have any direct crypto regulations, but there are rumors of the government discussing the bill submitted by the inter-ministerial committee headed by former Finance Secretary Subhash Chandra Garg, which seeks to ban cryptocurrencies like bitcoin. However, the Indian crypto industry firmly believes that this bill is outdated and will not be the one the government introduces. “The Indian government is currently engaging with various stakeholders and trying to work out a solution. India today stands at a juncture, where it can actually embrace the digital currency ecosystem as it is pushing for the digital revolution and is leading the way in the fintech segment,” Gaurav opined. Cashaa will also focus on the U.S. next year, the CEO explained. “We have already started issuing USD accounts regulated by the Banking Division of Colorado to our existing business customers as beta users,” he further shared with news.Bitcoin.com, adding that some crypto clients already using Cashaa’s USD accounts include Nexo, Coindcx, and Unocoin.


Surging CMS attacks keep SQL injections on the radar during the next normal

Sending malicious commands to a web application can result in disclosure of users' private data, and the attacker can gain access to a user's computer. Injecting code within the same local execution infrastructure is relatively easy compared with remote injection, which requires more specialized tools and skills. In the remote case, the attacker needs only a security flaw that offers a small window to send commands to the remote execution environment, enabling the malicious code to run without any evaluation. As a result, attackers can create a remote entrance to reach the target environment, and oftentimes the administrator has no knowledge that the system has been compromised. Most of the time, attackers make use of remote code execution security flaws that are on the web surface or within different narrow-use and specific ports and protocols. When a CMS is attacked, the remote code execution flaw often resides in a connected platform such as the .NET environment, the PHP scripting language, or a file-sharing service or database that has remote code execution vulnerabilities.
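The core injection mechanism can be sketched with Python's built-in sqlite3 module. The table, data, and payload below are illustrative; the contrast is between concatenating user input into the SQL string and binding it as a parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: user input concatenated straight into the SQL text,
# so the OR '1'='1' clause becomes part of the query and matches every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver passes the input as a bound parameter, never as SQL,
# so the payload is just an (unmatched) literal name.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(leaked)  # [('hunter2',)]
print(safe)    # []
```

The same principle -- never build executable statements from untrusted strings -- applies to any CMS plugin or database layer, not just SQLite.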


Malware gang uses .NET library to generate Excel docs that bypass security checks

NVISO says the Epic Manchego gang appears to have used EPPlus to generate spreadsheet files in the Office Open XML (OOXML) format. The OOXML spreadsheet files generated by Epic Manchego lacked a section of compiled VBA code, specific to Excel documents compiled in Microsoft's proprietary Office software. Some antivirus products and email scanners specifically look for this portion of VBA code to search for possible signs of malicious Excel docs, which would explain why spreadsheets generated by the Epic Manchego gang had lower detection rates than other malicious Excel files. This blob of compiled VBA code is usually where an attacker's malicious code would be stored. However, this doesn't mean the files were clean. NVISO says the Epic Manchego gang simply stored their malicious code in a custom VBA code format, which was also password-protected to prevent security systems and researchers from analyzing its content. But despite using a different method to generate their malicious Excel documents, the EPPlus-based spreadsheet files still worked like any other Excel document.
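The detection gap can be illustrated with a short Python sketch. OOXML documents are ZIP archives, and compiled VBA normally lives in an `xl/vbaProject.bin` entry; a naive scanner that keys only on that blob passes any archive lacking it. The helper below builds toy archives standing in for real spreadsheets, purely for illustration:

```python
import io
import zipfile

def has_compiled_vba(doc_bytes):
    """OOXML spreadsheets are ZIP archives; compiled VBA macros normally
    live in an 'xl/vbaProject.bin' entry. A scanner keying only on this
    blob misses documents that carry their macros elsewhere."""
    with zipfile.ZipFile(io.BytesIO(doc_bytes)) as zf:
        return any(name.endswith("vbaProject.bin") for name in zf.namelist())

def make_doc(with_vba):
    """Build a toy archive standing in for a real spreadsheet."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("xl/workbook.xml", "<workbook/>")
        if with_vba:
            zf.writestr("xl/vbaProject.bin", b"\x00")
    return buf.getvalue()

print(has_compiled_vba(make_doc(True)))   # True
print(has_compiled_vba(make_doc(False)))  # False: looks "clean" to such a check
```

This is why the NVISO finding matters: relying on one structural marker of maliciousness invites exactly this kind of evasion.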


American Express Establishes Data Analytics, Risk & Technology Lab (DART) In IIT Madras

The company hopes to apply these technologies across dimensions such as employee engagement and attention, and evaluating and enhancing the quality of education and learning in schools. The Lab at IIT Madras will explore a range of verticals with key emphasis on manufacturing, finance, healthcare, operations management and smart cities. “Our collaboration with IIT Madras reiterates our commitment to support and invest in interventions for public good in the country. The technologies and applied sciences R&D in the Lab will be beneficial for creating an overall societal impact through advancement in financial services, healthcare and safety standards,” said Bharathram Thothadri, EVP and Chief Credit Officer, American Express. The company also plans to build talent for industry by partnering with academia while promoting talent and diversity in technology. It has also announced annual scholarships for economically-disadvantaged and meritorious students, including ‘Ambition Awards’ for deserving women students at IIT Madras.


Observability Strategies for Distributed Systems - Lessons Learned

All the panelists said some variation of, "make the easiest path the correct path," with Fong-Jones observing that, "teams are super lazy." Because most teams are focused on developing their service, find ways to create automatic dashboards and update runbooks. Spoons emphasized the need to create machine-readable central documentation. Similarly, using structured logging makes information digestible and greatly aids in spotting patterns. One of the behaviors to encourage is being able to form and test hypotheses. Having all the data from across a distributed system can become overwhelming, so you need ways to narrow your focus. The practice of site-reliability engineering requires a different mindset than "ordinary" software engineering. Although DevOps has been an attempt to apply software engineering to IT operations, SRE takes an opposite approach when thinking about failure. This can be thought of as the duality between monitoring, which is looking for what is anticipated, and observing, where the focus is on what is unexpected. Each of the panelists shared a few pitfalls they had seen and hoped people would avoid.
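As a sketch of the structured-logging point, a minimal JSON formatter for Python's standard logging module might look like the following; the logger name and fields are hypothetical:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object so log aggregators can
    filter and group on fields instead of grepping free text."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # merge in any structured fields attached via extra={...}
            **getattr(record, "fields", {}),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Field names here are illustrative; the point is machine-readable keys.
log.info("payment settled", extra={"fields": {"order_id": "A-1001", "ms": 412}})
```

Every line is now a queryable record with consistent keys, which is exactly what makes cross-service pattern hunting tractable.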


Traditional Banking is an Endangered Species

For banks to survive in a post-COVID-19 world they must review their risk modelling strategies to accommodate the pandemics of the future, rather than falling back to what they know once COVID-19 has been contained. Banks need to ensure that remote working can be provisioned effectively in the event of another pandemic, and need to abandon paper processes altogether. All of this is easier said than done, and banks must spend time ensuring they are communicating effectively across the entire workforce. For years, banks have been grappling with siloed data, and now they must ensure they do not have siloed communications, where time and money could be lost if the workforce is not kept in the loop across the front end (e.g. products, solutions and services) and the back end (e.g. banking architecture). By harnessing the payments ecosystem, banks can collaborate with technology specialists to keep up with the pace of demand for international, online payments. ‘Open Banking’ will enable banks to access the right technological expertise to solve the challenges they are facing on a daily basis, and provision fully for the needs of their new, existing and prospective customers.


Cybersecurity Pros Face a Huge Staffing Shortage As Attacks Surge During The Pandemic

Shearer said that to fill the talent gap, more outreach needs to be done to recruit younger and more diverse workers into the aging cybersecurity workforce. “Diversity is a big part of it — women are underrepresented, it’s improving. We also here in the United States need to look at other underrepresented minority groups and get them into the fold because it’s going to take everyone we can find to be interested in cyber,” he said. “As people start to retire, it’s only going to exacerbate the fact that it’s an undersized cyber workforce.” Jobs in the field can be lucrative as well: (ISC)²'s data finds the average North American salary for cybersecurity professionals is $90,000 a year, and those who hold security certifications can make more. ... Hiring has become somewhat easier in recent months, Wysopal says, a silver lining in the face of a broader skilled talent shortage in the industry. As the pandemic forced closures and layoffs in all sectors of the economy, more cyber workers have become available, and due to the nature of remote work, candidates outside the area have become more appealing.


SASE vs SD-WAN: A Comparison

SASE’s focus is on providing secure access to distributed resources for the network and its users. The resources can be distributed in private data centers, colocation facilities, and the cloud. As such, security and networking decision-making are baked into the same tools. SASE products place security tools both on a user’s device, as a security agent, and in the cloud, as a cloud-native software stack. For example, the security agent can contain a secure web gateway while a vendor’s cloud contains a firewall-as-a-service. In a branch office or other location with a collection of people, a SASE appliance is common in order to secure agentless devices like printers. SD-WAN technology was not designed with a focus on security. SD-WAN security is often delivered via secondary features or by third-party vendors. While some SD-WAN solutions do have baked-in security, they are in the minority. SD-WAN’s central goal is to connect geographically separate offices to each other and to a central headquarters, with flexibility and adaptability to different network conditions. In an SD-WAN, security tools are usually located at offices in CPE rather than on devices themselves.


3 Predictions For The Role Of Artificial Intelligence In Art And Design

Until we can fully understand the brain’s creative thought processes, it’s unlikely machines will learn to replicate them. As yet, there’s still much we don’t understand about human creativity. Those inspired ideas that pop into our brain seemingly out of nowhere. The “eureka!” moments of clarity that stop us in our tracks. Much of that thought process remains a mystery, which makes it difficult to replicate the same creative spark in machines. Typically, then, machines have to be “told” what to create before they can produce the desired end result. The AI painting that sold at auction? It was created by an algorithm that had been trained on 15,000 pre-20th century portraits, and was programmed to compare its own work with those paintings. ... Intelligent machines have no problem coming up with infinite possible solutions and permutations, and then narrowing the field down to the most suitable options – the ones that best fit the human creative’s “vision”. In this way, machines could help us come up with new creative solutions that we couldn’t possibly have come up with on our own.


Eight case studies on regulating biometric technology show us a path forward

The clearest one was the chapter on India by Nayantara Ranganathan, and the chapter on the Australian facial recognition database by Monique Mann and Jake Goldenfein. Both of these are massive centralized state architectures where the whole point is to remove the technical silos between different state and other kinds of databases, and to make sure that these databases are centrally linked. So you’re creating this monster centralized, centrally linked biometric data architecture. ... The second—and this is a lesson that we keep repeating—consent as a legal tool is very much broken, and it’s definitely broken in the context of biometric data. But that doesn’t mean that it’s useless. Woody Hartzog’s chapter on Illinois’s BIPA [Biometric Information Privacy Act] says: Look, it’s great that we’ve had several successful lawsuits against companies using BIPA, most recently with Clearview AI. But we can’t keep expecting “the consent model” to bring about structural change. Our solution can’t be: The user knows best; the user will tell Facebook that they don’t want their face data collected.



Quote for the day:

"The gem cannot be polished without friction, nor people perfected without trials." -- Confucius