Daily Tech Digest - September 09, 2020

Use cases for AI and ML in cyber security

With more employees working from home, and possibly using their personal devices to complete tasks and collaborate with colleagues more often, it’s important to be wary of scams that are afoot within text messages. “With malicious actors recently diversifying their attack vectors, using Covid-19 as bait in SMS phishing scams, organisations are under a lot of pressure to bolster their defences,” said Brian Foster, senior vice-president of product management at MobileIron. “To protect devices and data from these advanced attacks, the use of machine learning in mobile threat defence (MTD) and other forms of managed threat detection continues to evolve as a highly effective security approach. Machine learning models can be trained to instantly identify and protect against potentially harmful activity, including unknown and zero-day threats that other solutions can’t detect in time. Just as important, when machine learning-based MTD is deployed through a unified endpoint management (UEM) platform, it can augment the foundational security provided by UEM to support a layered enterprise mobile security strategy. Machine learning is a powerful, yet unobtrusive, technology that continually monitors application and user behaviour over time so it can identify the difference between normal and abnormal behaviour. ...”
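
The behaviour-based detection Foster describes is, at bottom, anomaly detection: learn a baseline of normal device and user activity, then flag deviations from it. A minimal sketch of that idea, using illustrative features and an off-the-shelf model rather than anything MobileIron actually ships:

```python
# Minimal anomaly-detection sketch: flag device/app behaviour that deviates
# from a learned baseline. Feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline of "normal" sessions: [requests/min, bytes sent (KB), night-time logins]
normal = np.column_stack([
    rng.normal(20, 5, 1000),
    rng.normal(300, 80, 1000),
    rng.poisson(0.2, 1000),
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A new session with an unusual traffic profile
suspect = np.array([[180, 9000, 6]])
print(model.predict(suspect))  # -1 means "anomalous", 1 means "normal"
```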


Evilnum group targets FinTech firms with new Python-based RAT

The infection chain also adds a rogue scheduled task called “Adobe Update Task”, which executes yet another malicious downloader that poses as Adobe's Flash Player and is called Fplayer.exe. This file is a maliciously modified version of Nvidia's Stereoscopic 3D driver installer. It seems that the Evilnum attackers have gone to great lengths to maintain persistence and stealth by impersonating a variety of legitimate programs that administrators might not find suspicious on a Windows system. The PyVil RAT talks to the command-and-control (C&C) server using HTTP but the data inside is encrypted with a hard-coded key to hide it from network-level Web traffic inspection products. In the past, Evilnum configured its malware to only talk to command-and-control servers using IP addresses, not domain names. However, Cybereason has detected a growing number of domains being associated with the IP addresses used by the Evilnum C&C infrastructure during the past weeks, signaling a change in tactics as well as a growing infrastructure. The researchers also observed PyVil RAT downloading a custom version of an open-source password dumping tool called LaZagne, a post-exploitation tool that's written in Python and is popular with penetration testers.


Open source data control for cloud services with Apache Ranger

RBAC is based on the concepts of users, roles, groups, and privileges in an organization. Administrators grant privileges or permissions to pre-defined organizational roles—roles that are assigned to subjects or users based on their responsibility or area of expertise. For example, a user who is assigned the role of a manager might have access to a different set of objects and/or is given permission to perform a broader set of actions on them as compared to a user with the assigned role of an analyst. When the user generates a request to access a data object, the access control mechanism evaluates the role assigned to the user and the set of operations this role is authorized to perform on the object before deciding whether to grant or deny the request. RBAC simplifies the administration of data access controls because concepts such as users and roles are well-understood constructs in a majority of organizations. In addition to being based on familiar database concepts, RBAC also offers administrators the flexibility to assign users to various roles, reassign users from one role to another, and grant or revoke permissions as required. Once an RBAC framework is established, the administrator's role is primarily to assign or revoke users to specific roles. 
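
The access decision described here reduces to a lookup: resolve the user's role, then check whether that role is permitted to perform the requested action on the object. A minimal sketch of the idea, with illustrative role and resource names rather than Apache Ranger's actual policy model:

```python
# Minimal RBAC sketch: roles map to permitted (resource, action) pairs.
# Illustrates the decision described above, not Apache Ranger's policy engine.
ROLE_PERMISSIONS = {
    "manager": {("sales_db", "read"), ("sales_db", "write"), ("reports", "read")},
    "analyst": {("sales_db", "read"), ("reports", "read")},
}

USER_ROLES = {"alice": "manager", "bob": "analyst"}

def is_allowed(user: str, resource: str, action: str) -> bool:
    role = USER_ROLES.get(user)
    return role is not None and (resource, action) in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("alice", "sales_db", "write"))  # True
print(is_allowed("bob", "sales_db", "write"))    # False: analyst role lacks write
```

Administration then amounts to editing the role-to-permission and user-to-role mappings, which is exactly the "assign or revoke users to specific roles" workflow the passage describes.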


Using Measurement to Optimise Remote Work

Citrix’s Remote Works Podcast recently interviewed Laura Giurge, a post-doctoral researcher at London Business School and Oxford University’s Wellbeing Research Centre. Giurge explained that the pandemic has created a "big experiment of working from home", and that its findings were challenging the traditional assumption that productivity is measured in hours worked, rather than in the impact of an employee’s output. She added that this required a change in mindset and was particularly challenging for traditional managers: "It is really hard for managers, if you are really used to seeing your employees in the office and all of a sudden you’re not. It’s very difficult. But if you start from a mindset of experimentation and understanding there are better ways for experimenting with new ways of working and seeing what works, then you are likely to get your employees to work better and also be happier." Longman wrote that he "calculated the average number of stories" completed "during 2019 and used this as a comparison with 2020 data." By examining trends by month and by quarter, he wrote that "both views suggested that the work completed during lockdown was within ... expected levels of volatility."


Data Labeling for Natural Language Processing: a Comprehensive Guide

Once you have identified your training data, the next big decision is in determining how you’d like to label that data. The labels to be applied can lead to completely different algorithms. One team browsing a dataset of receipts may want to focus on the prices of individual items over time and use this to predict future prices. Another may be focused on identifying the store, date and timestamp and understanding purchase patterns. Practitioners will refer to the taxonomy of a label set. What level of granularity is required for this task? Is it enough to understand that a customer is sending in a customer complaint and route the email to the customer support team? Or would you like to specifically understand which product the customer is complaining about? Or even more specifically, whether they are asking for an exchange/refund, complaining of a defect, an issue in shipping, etc.? Note that the more granular the taxonomy you choose, the more training data will be required for the algorithm to adequately train on each individual label; phrased differently, each label requires a sufficient number of examples, so more labels means more labeled data overall.
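
The granularity trade-off can be made concrete with a two-level label set. The categories and the per-label example count below are illustrative assumptions, not a recommended scheme:

```python
# Illustrative label taxonomies at two levels of granularity.
COARSE_LABELS = ["complaint", "question", "praise"]

FINE_LABELS = {
    "complaint": ["refund_request", "defect_report", "shipping_issue"],
    "question": ["product_info", "order_status"],
    "praise": ["general"],
}

# If each label needs roughly the same number of examples to train on,
# the labelling budget grows with the number of leaf labels.
examples_per_label = 500
coarse_budget = len(COARSE_LABELS) * examples_per_label
fine_budget = sum(len(v) for v in FINE_LABELS.values()) * examples_per_label
print(coarse_budget, fine_budget)  # 1500 vs 3000 labelled examples
```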


Chilean bank shuts down all branches following ransomware attack

The incident is currently being investigated as having originated from a malicious Office document received and opened by an employee. The malicious Office file is believed to have installed a backdoor on the bank's network. Investigators believe that on the night between Friday and Saturday, hackers used this backdoor to access the bank's network and install ransomware. Bank employees working weekend shifts discovered the attack when they couldn't access their work files on Saturday. BancoEstado reported the incident to Chilean police, and on the same day, the Chilean government sent out a nationwide cyber-security alert warning about a ransomware campaign targeting the private sector. While the bank initially hoped to recover from the attack unnoticed, the damage was extensive, according to sources, with the ransomware encrypting the vast majority of internal servers and employee workstations. The bank initially disclosed the attack on Sunday, but as time went by, bank officials realized employees wouldn't be able to work on Monday, and decided to keep branches closed while they recovered. Luckily, it appears the bank had done its job and properly segmented its internal network, which limited what the hackers could encrypt.


How to ensure cybersecurity and business continuity plans align

Ideally, according to industry good practice, a disruptive incident should trigger an IR plan that assesses the damage and initiates steps to respond quickly to the cyber incident. Results of the IR plan can trigger a BC or a DR plan, or both, based on the nature of the event. BC/DR plans recover and restore critical assets -- people, processes, technology and facilities -- the business needs to function. Cybersecurity plans respond to specific disruptive events and may include an IR plan component to determine the nature of the event before launching response activities. The key is to determine at what point the cybersecurity attack threatens the organization and its ability to conduct business. This suggests that descriptive language should be added to cybersecurity plans to trigger IR, as well as BC/DR plans. Let's assume there's a full complement of plans in place that deal with business- and technology-focused incidents. In some cases, only a specific security strategy or plan -- e.g., information security -- will be needed. In other situations, one or more plans may need to be launched. The figure below depicts a simple decision flow diagram showing how such plan linkages may be arranged and launched in response to a cybersecurity attack.
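
The plan-linkage decision the article depicts in its figure can be expressed as a small decision function. The plan names and trigger conditions below are illustrative assumptions, not a reproduction of that figure:

```python
# Illustrative sketch of the plan-triggering decision flow described above.
def plans_to_trigger(incident: dict) -> list[str]:
    plans = ["incident_response"]            # IR assesses damage first
    if incident.get("business_operations_disrupted"):
        plans.append("business_continuity")  # people/process/facility recovery
    if incident.get("systems_unrecoverable_in_place"):
        plans.append("disaster_recovery")    # technology restoration
    return plans

print(plans_to_trigger({"business_operations_disrupted": True}))
# ['incident_response', 'business_continuity']
```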


UK tech sector vacancies up 36% during summer — Tech Nation

“Since lockdown, companies have come to realise that they need industrial-grade technology to run their businesses and tech companies are hiring people to service these new customers, expand and build new products,” said Haakon Overli, co-founder of enterprise software-focused venture fund Dawn Capital. “We’re seeing it right across our portfolio.” The Tech Nation research also suggests that recovery from the pandemic is set to be uneven, with industries such as travel and retail predicted to drastically cut their workforces, while others are able to prosper from changes in customer behaviour. Additionally, the report said the trend of remote working will continue to open up “high-paid, quality” opportunities to residents outside larger cities. Recent research from Culture Shift found that culture has improved for tech sector employees while working remotely. However, 50% said they feel isolated while working from home. Despite employment in the UK tech sector looking promising due to the surge in vacancies, the skills gap remains an issue, with two-thirds of businesses already having unfilled digital skills vacancies, while 58% say they’ll need significantly more digital skills in the next five years, according to the CBI.


Why More Healthcare Providers are Moving to Public Cloud

What may be one of the hardest truths of this extraordinary time -- apart from the human suffering -- is that the need for dynamic surge capacity will not disappear when a vaccine is available. As the World Economic Forum has said, we have entered a new era where the risk of future pandemics is high. This forever alters the infrastructure needed to support shifting demands on technology. The public cloud offers the systems resilience that healthcare providers need in order to sustain operations under severe disruption, flexing to address highly volatile customer demand and managing vastly increased needs for remote network access. Providers long viewed investing in the public cloud as a risky business because of security concerns. But over the past two years, many have begun their cloud journey buoyed by other industries’ and research institutions’ embrace of its “deny by default” security posture and, most importantly, limitless opportunities for innovation. There could not be a better time for this. An investment in systems resilience via cloud is an investment in business enablement. A resilient technology infrastructure scales up or down on demand based on real-time changes in usage to support care volume variability. It identifies traffic spikes and automatically adjusts capacity to drive responsiveness with new cost efficiencies.


Cybersecurity Skills Gap Worsens, Fueled by Lack of Career Development

The fundamental causes of the skills gap are myriad, starting with a lack of training and career-development opportunities. About 68 percent of the cybersecurity professionals surveyed by ESG/ISSA said they don’t have a well-defined career path, and basic growth activities, such as finding a mentor, getting basic cybersecurity certifications, taking on cybersecurity internships and joining a professional organization, are missing steps in their endeavors. The survey also found that many professionals start out in IT, and find themselves working in cybersecurity without a complete skill set. A full 63 percent of respondents in the survey said they’ve worked in cybersecurity for less than three years, with 76 percent starting as IT professionals before switching their career to cybersecurity. “Cybersecurity professionals often muddle through their careers with little direction, jumping from job to job and enhancing their skill sets on the fly rather than in any systematic way,” according to the report. To go along with this, the survey asked respondents to speculate on how long it takes a cybersecurity professional to become proficient at the job. The highest percentage of respondents (39 percent) believe it takes anywhere from three to five years to develop real cybersecurity proficiency, while 22 percent say two to three years, and 18 percent claim it takes more than five years.



Quote for the day:

"It's fine to celebrate success but it is more important to heed the lessons of failure." -- Bill Gates

Daily Tech Digest - September 08, 2020

Closing The Tech Skills Gap: 3 Key Factors For CEOs To Consider

Limited resources and tightened budgets have placed restrictions on hiring new talent, and several industries were left scrambling to reskill and quickly adapt. While hiring new talent seems like a valid solution, in reality the hiring, onboarding and culture development process requires a significant amount of time and dedication, impacting the company’s overall output. As enterprises continue to identify ways to do more with less, now is an opportune time for reskilling and upskilling initiatives to become part of the “new norm.” Reskilling and upskilling initiatives are not only beneficial to employees but also impactful to the enterprise. According to a recent study, nearly 30% of employees feel their skills will be redundant within the next two years, with 50% of those in Gen-Y and Gen-Z indicating that their skills will be irrelevant within the next four to five years. Although technology tends to create more jobs than it takes away, those fears are still incredibly prevalent. A workforce of the future must be prepared to welcome change and remain agile; it must also have the support and resources to further enhance its skills. Furthermore, employees will find comfort in knowing their company wants to invest in them and their future—and loyalty will likely follow.


Stretch or safe? The art of setting goals for your teams

With so little clarity about the future, how can leaders set business goals for the next six months to a year? During the dozen years between the 2008 financial crisis and the current pandemic, the world seemed far more stable, and budgeting was more of a predictable process. But now? Who knows. We are living in an era of VUCA, an acronym coined by the U.S. Army War College that stands for volatility, uncertainty, complexity, and ambiguity. This uncertainty is raising new challenges for a fundamental leadership skill: goal setting. It is as much an art as a science, because it requires finding the sweet spot between the aspirational and the realistic. Yes, there is something galvanizing and inspirational about a big stretch goal, as President John F. Kennedy knew in 1961, when he announced that the United States would put a man on the moon by the end of the decade, even though the longest time any American had spent in space was barely 15 minutes. The business leader’s job is to set an ambitious target that will bring out the best in a company’s teams and achieve what may seem impossible at first. These are the BHAGs — or big, hairy, audacious goals, in the words of Jim Collins, the author of Good to Great and other books.


Can AI help with your quest for global talent?

For candidates, AI can help to eliminate some of the most problematic human flaws in the recruitment process: hiring bias. Although often unintentional, stereotypes and personal prejudices are something which even the most conscientious recruiters can fall foul of. AI allows for blind applicant screening and levels the playing field. Chatbots can also help to improve the candidate experience and engagement by offering immediate replies to inquiries or queries, simple job applications and ongoing assistance throughout the process. Employers and HR personnel can benefit massively from AI, too. For starters, it can be used to scan CVs for certain keywords to shortlist the most suitable candidates intelligently. Predictive analysis can even determine which candidates are more likely to succeed in the roles — helping to improve the quality of the hire and ensure only the most retainable talents are brought on board. AI can also help companies reach passive candidates who aren’t actively seeking a new role — which can often be one of the best applicant pools. In the past, reaching these candidates involved poring through CV databases, lots of cold-calling and even more dead ends. 
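
The CV-scanning step mentioned here is, in its simplest form, keyword matching against a role profile. A toy sketch, with made-up keywords and candidates (real screening tools weight, rank and contextualise far more carefully):

```python
# Toy sketch of keyword-based CV shortlisting. Real screeners weight skills,
# experience and context rather than counting raw matches.
ROLE_KEYWORDS = {"python", "sql", "machine learning", "aws"}

def score_cv(cv_text: str) -> int:
    text = cv_text.lower()
    return sum(1 for kw in ROLE_KEYWORDS if kw in text)

cvs = {
    "candidate_a": "5 years Python and SQL, built machine learning pipelines on AWS",
    "candidate_b": "Experienced project manager, strong stakeholder communication",
}
shortlist = sorted(cvs, key=lambda name: score_cv(cvs[name]), reverse=True)
print(shortlist)  # ['candidate_a', 'candidate_b']
```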


99 Ransomware Problems - and a Decryptor Ain't One

Security experts say that more organizations have been putting in place viable defenses against ransomware, including frequently backing up all systems, and storing those backups offline. As a result, if they suffer a ransomware infection, they can simply wipe systems and restore from backups, without having to even consider paying a ransom. In response, beginning in November 2019, the Maze gang began exfiltrating data before crypto-locking systems, then using the threat of data leaking to try and force more victims to pay. Unfortunately, this strategy not only worked, but has been emulated by numerous other gangs ... Unfortunately, the move to exfiltrate data, name-and-shame victims and so on has been leading to higher profits for criminals. In numerous recent cases, despite being able to fully restore data from backups, victims have then felt "compelled to have to engage in an extortion negotiation and potentially a payment to a threat actor because of the potential for what they deemed to be irreparable harm to their business if the information is leaked, and so they end up paying to prevent that," says Coveware CEO Bill Siegel.


The New Capabilities in Endpoint Security for Businesses

Surprisingly, endpoint security evolved perhaps the most of any branch of cybersecurity. After all, look at the history of these critical business-level solutions. At first, they only needed to protect a defined set of physical, on-premises devices from known malware and viruses. A simple antivirus solution could do the trick many times over. However, enterprises face an increasingly complex IT and device environment that in no way resembles ages past. For example, you need to contend with the increased necessity of remote work in the wake of COVID-19; in fact, these changes might result in permanent reassessments of work-from-home policies. That means new endpoints operating on personal Wi-Fi or public Wi-Fi connections, both of which pose cybersecurity challenges in terms of visibility and consistency. Additionally, those endpoints connecting to corporate networks are also undergoing changes. No less an authority than Gartner noted that bring-your-own-device (BYOD) as a term may not adequately describe the situation. It might more accurately be summarized as bring-your-own-PC (BYOPC), which adds another layer of endpoint security complexity.


How Diffblue uses AI to automate unit testing for Java applications

The irony, as Diffblue CEO Mathew Lodge pointed out in an interview, is how late the software industry is in embracing AI to improve software development, given how we've used AI to automate and disrupt so many other industries--from retail, travel, transportation, manufacturing, and more. Lodge said Diffblue researchers took advantage of the machine learning strategy that powered AlphaGo, Alphabet subsidiary DeepMind's software program that beat the world champion player of Go. While the company is starting with a Java solution (by far the most popular language in the Global 2000 where companies invest heavily in productivity tools), its technology can also be used to automate testing for most programming languages such as Python, JavaScript, and C#, among others.  Among the first customers to roll out Diffblue's solution is Goldman Sachs (with an annual IT budget larger than many countries' GDP). Using Diffblue's AI on one module with an important backend system, Goldman Sachs was able to expand existing unit test coverage from 36% to 72% in less than 24 hours, a feat that would have required more than eight days of developer time if done manually. Developer time savings? 90%.


The Cloud Is Not The Edge

Over the last 15 years, we have seen major growth in social and mobile categories and SaaS offerings. Most recently, a new technology has emerged called the internet of things (IoT), and it demands a new type of computing called edge computing.  Today, as we shift from doing all processing on Amazon Web Services or Microsoft Azure computers and move it to our businesses, construction zones, farms and trucks, we hear that edge computing will be “bigger than the cloud.” This new type of computing will provide augmented reality for remote service, real-time monitoring of equipment in the field, optimizations for natural resources and machine-learned energy efficiencies, among other returns. While it’s tempting to believe we can just move our cloud applications to the edge, this is not possible. Furthermore, companies that take such a strategy will struggle for years to come because the cloud is fundamentally different from edge computing. ... Edge-native architectures should expect a diverse infrastructure for deployment. This means that edge applications should easily run on bare metal processors, virtual machines and containers. Conversely, cloud offerings and services are built and heavily tuned for a single type of environment and cannot run anywhere.


Low-Code Revolution to Prepare Manufacturers for Industry 4.0

The manufacturing industry is experiencing a move towards digitization. The sector has already adopted digital technologies like artificial intelligence, augmented reality, robotics, additive manufacturing, etc. These technologies have enabled manufacturers to gain a competitive advantage in terms of manufacturing efficiency and cost. Due to the pandemic, traditional supply chains and manufacturing environments are crumbling, so there is a need to move towards a digitally-driven, more flexible and agile approach. In these challenging times, many of the leading companies are innovating and developing their own applications. Businesses that tailor their existing technical capability and resources to digital technology can limit COVID-19’s impact. In times like these, when there are limited resources and less time to build applications for business continuity, businesses are relying on low-code technology to create and pilot new applications at rapid rates. Low-code platforms are becoming popular among manufacturing companies as they deliver customized solutions and offer flexibility, scalability, and efficient technological innovation.


Five lessons for digital transformation success

Developing the right talents and skills is one of the important transformation initiatives. While some people might immediately say digital technologies are the key success factor, those who are experienced in the process would say that’s not necessarily so. Chan Suh, chief digital officer of business transformation specialist Prophet, warns against being seduced by the promises of technology’s magical tools for creating revenue growth. While businesses may need digital innovations such as artificial intelligence for deep insight, tech stacks are just tools and, without the right operating instructions, they either lie fallow or become money pits. Suh says it’s a mistake that has cost global businesses billions of dollars in wasted investments. “We need the conceptual strategies and innovations to guide our tech investments as well as the human expertise to use it properly. However, that human expertise is especially rare when it comes to navigating the highly complicated interdependencies of digitally powered businesses,” he says. With building capability, the key is the right mix of human expertise and technology working in a coherent, flexible operating model with the customer at the centre.


Delivering on your promises

Bertini and Koenigsberg make an impassioned and ambitious case for rewriting the rules of commerce. They argue that although customers want to buy a solution to a “job that needs to be done” (in the words of Clay Christensen), they’re offered only the means to buy that solution, typically by taking ownership of a product. This is due to “a combination of neglect, inertia, fear of change, and comfort with the status quo” on the part of companies. Buying a product (e.g., an engine) isn’t always a good proxy for the end goal (e.g., reliable high performance). Reserving particular wrath for healthcare, education, and advertising, the authors focus on three forms of waste in the exchange between companies and customers: (1) access — customers can’t get the product (e.g., a car) they want because of the cost or a lack of stock; (2) consumption — they don’t or can’t use what’s offered (e.g., bundles of TV programs or a car that sits unused 90 percent of the time); and (3) performance — the product doesn’t deliver the value customers expect. “Lean commerce,” in which the fortunes of companies depend explicitly on delivering value to the customer, is a much more efficient model. To determine value, the authors use an end or outcome that can be easily understood, verified, and quantified. Feeling happy or amused is hard to measure, but measuring a laugh is easier.



Quote for the day:

"When Things Fall Apart " is when we usually have the most to learn about ourselves." -- Oprah

Daily Tech Digest - September 07, 2020

Brain-Inspired Electronic System Could Make AI 1,000 Times More Energy Efficient

In the new study, published in Nature Communications, engineers at UCL found that accuracy could be greatly improved by getting memristors to work together in several sub-groups of neural networks and averaging their calculations, meaning that flaws in each of the networks could be canceled out. Memristors, described as “resistors with memory,” as they remember the amount of electric charge that flowed through them even after being turned off, were considered revolutionary when they were first built over a decade ago, a “missing link” in electronics to supplement the resistor, capacitor, and inductor. They have since been manufactured commercially in memory devices, but the research team say they could be used to develop AI systems within the next three years. Memristors offer vastly improved efficiency because they operate not just in a binary code of ones and zeros, but at multiple levels between zero and one at the same time, meaning more information can be packed into each bit. Moreover, memristors are often described as a neuromorphic (brain-inspired) form of computing because, like in the brain, processing and memory are implemented in the same adaptive building blocks, in contrast to current computer systems that waste a lot of energy in data movement.
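
The averaging idea in the UCL work can be illustrated numerically: if each memristor-based sub-network produces the right answer plus roughly independent device noise, averaging across sub-groups shrinks the error. A toy sketch, not the paper's actual model:

```python
# Toy illustration: averaging independent noisy sub-networks reduces error.
import numpy as np

rng = np.random.default_rng(1)
true_output = 0.7           # value an ideal (flaw-free) network would compute
device_noise_std = 0.2      # per-sub-network error from imperfect devices

single = true_output + rng.normal(0, device_noise_std)
ensemble = np.mean(true_output + rng.normal(0, device_noise_std, size=16))

print(abs(single - true_output))    # error of one flawed network
print(abs(ensemble - true_output))  # averaging 16 networks cuts the expected error by ~sqrt(16)
```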


Management skills: Five ways building your network will help you get ahead

Mark Gannon, director of business change and information solutions at Sheffield City Council, says smart digital leaders make sure they carry on learning – even once they get to the very top. Gannon says developing experiences outside the day job has always been important to him, both as full-time CIO and in his stint as a consultant before joining the council. "There's the basic stuff about just getting out there and understanding your customers and spending time to speak with them. Consulting was interesting because it gave me the opportunity to look outside my own experience and see what other organisations were doing. I think it's really important to be constantly learning," he says. Gannon suggests his determination to develop new skills might be something to do with having completed a doctorate prior to joining the IT profession. His interest in education continues to this day – Gannon is a school parent governor. "Being a governor is interesting and getting out and engaging with other networks in the city is something I do a lot. We've developed a cross-community network, called dotSHF, which is about how we bring together the work that's being done by sole traders, and private and public sector organisations around digital," says Gannon.


Telling tales: using behavioural AI to reconstruct attack storylines

Behavioral AI can be used to mitigate threats automatically—a seriously powerful game changer. The technology is capable of making a decision on the device, without relying on the cloud, or on humans, to tell it what to do. Monitoring behaviour is a tricky, complex problem, and you want to feed your algorithm robust, informative, context-rich data which really captures the essence of a program’s execution. To do this, you need to monitor the operating system at a very low level and, most importantly, link individual behaviours together to create full “storylines”. For example, if a program executes another program, or uses the operating system to schedule itself to execute on boot up, you don’t want to consider these different, isolated executions, but a single story. Training AI models on behavioural data is similar to training static models, but with the added complexity of the time dimension. In other words, instead of evaluating all features at once, you need to consider cumulative behaviours up to various points in time. Interestingly, if you have good enough data, you don’t really need an AI model to convict an execution as malicious. For example, if the program starts executing but has no user interaction, then it tries to register itself to start when the machine is booted, then it starts listening to keystrokes, you could say it’s very likely a keylogger and should be stopped.
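
That closing keylogger heuristic can be written down directly as a rule over a linked storyline of behaviours. The event names below are illustrative assumptions, not the vendor's telemetry schema:

```python
# Sketch of the storyline heuristic described above: individual behaviours are
# linked into one process "story" and evaluated cumulatively.
def looks_like_keylogger(storyline: list[str]) -> bool:
    no_user_interaction = "user_interaction" not in storyline
    persists_on_boot = "registered_boot_persistence" in storyline
    hooks_keyboard = "keyboard_hook_installed" in storyline
    return no_user_interaction and persists_on_boot and hooks_keyboard

story = ["process_start", "registered_boot_persistence", "keyboard_hook_installed"]
print(looks_like_keylogger(story))  # True -> stop the execution
```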


Microsoft Updates Edge With Exciting New Features To Beat Chrome

Microsoft’s Edge browser is growing in popularity, reaching the number two position in the desktop browser market, even beating privacy-focused option Firefox. Now Microsoft has just unveiled a bunch of new features that make it a valid alternative to Google Chrome as an increasing number of people work from home. One very useful update, which would be great if it comes to fruition, was spotted by Windows Latest in the Edge Canary developer build: a new feature called “Web Capture” which allows you to take a screenshot of a webpage—in full or cropped—and copy it to the clipboard or preview it. ... Meanwhile, more new features to boost your security are expected in Edge 86, which is due to drop in the next few weeks, Microsoft has confirmed. These include new alerts for the Edge password monitor if a compromised password is detected. At the same time, Edge will add the option to show or hide the favorites bar from the favorites management page. Edge will also add policy improvements for enterprises using the browser for various users and applications. Just last week, Microsoft started to roll out Edge 85 with multiple features aiming to help those working from home during the coronavirus pandemic.


How AI will automate cybersecurity in the post-COVID world

At a basic level, AI uses data to make predictions and then automates actions. This automation can be used for good or evil. Cybercriminals take AI designed for legitimate purposes and use it for illegal schemes. Consider one of the most common defenses attempted against credential stuffing – CAPTCHA. Invented a couple of decades ago, CAPTCHA tries to protect against unwanted bots by presenting a challenge (e.g., reading distorted text) that humans should find easy and bots should find difficult. Unfortunately, cybercriminal use of AI has inverted this. Google did a study a few years ago and found that machine-learning based optical character recognition (OCR) technology could solve 99.8% of CAPTCHA challenges. This OCR, as well as other CAPTCHA-solving technology, is weaponized by cybercriminals who include it in their credential stuffing tools. Cybercriminals can use AI in other ways too. AI technology has already been created to make cracking passwords faster, and machine learning can be used to identify good targets for attack, as well as to optimize cybercriminal supply chains and infrastructure. We see incredibly fast response times from cybercriminals, who can shut off and restart attacks with millions of transactions in a matter of minutes.


The Principles of Planning and Implementing Microservices

Each service should have a version, which updates regularly in every release. Versioning makes it possible to identify a service and deploy a specific version of it. It also enables the consumers of the service to be aware when the service has changed, and by that avoid breaking the existing contract and the communication between the services. Different versions of the same service can coexist. With that, the migration from the old version to the new version can be gradual, without having too much impact on the whole application. ... In a microservices environment, there are many small services that communicate constantly with each other, so it is easy to get lost in what a service does or how to use its API. Documentation can facilitate that. Keeping valid, up-to-date documentation is a tedious and time-consuming task. Naturally, this can be prioritised low in the developer's task list. Therefore, automation is required instead of documenting manually (readme files, notes, procedures). There are various tools to codify and automate tasks to keep the documentation updated while the code continues to change. Tools like Swagger UI or API Blueprint can do the job. They can generate a web UI for your microservices API, which alleviates the orientation efforts. Once again, standardization is an advantage; for example, Swagger implements the OpenAPI specification, which is an industry standard.
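
Both principles, explicit versioning and generated documentation, can be shown in one small sketch. FastAPI is used here only because it implements the OpenAPI specification and serves Swagger UI automatically; it is an illustrative choice, not a stack the article prescribes:

```python
# Illustrative sketch: a versioned microservice endpoint whose OpenAPI/Swagger UI
# documentation is generated automatically from the code.
from fastapi import FastAPI

app = FastAPI(title="orders-service", version="1.3.0")

@app.get("/v1/orders/{order_id}")
def get_order_v1(order_id: int):
    return {"order_id": order_id, "schema": "v1"}

@app.get("/v2/orders/{order_id}")
def get_order_v2(order_id: int):
    # v2 can evolve independently while v1 keeps serving existing consumers
    return {"order_id": order_id, "schema": "v2", "currency": "USD"}

# Assuming this file is service.py, running `uvicorn service:app` serves
# interactive Swagger UI documentation at /docs with no extra effort.
```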


How Cybercriminals Take the Fun Out of Gaming

The underground market is also active. In a recent blog, Singer broke down the world of cybercrime in games. "The first thing to understand about the criminals who attack the games industry is that they participate in a working, fluid, day-to-day economy that they manage completely themselves," he wrote. "Cybercriminals have built informal structures that mirror the efficiencies of standard enterprise operations. They have developers, QA folks, middle managers, project managers, salespeople, and even marketing and PR people who hype vendors and products." Austin Francisco, security analyst at Key Cyber Solutions (KCS) – who has "been gaming since the '90s" – says hackers advertise stolen goods and cheats as "a product and not like a hack," offering player values such as the ability to "have 100% accuracy aim" or "see people through walls," for example. Singer doesn't understand the appeal, but "there are enough people who enjoy it that there's a thriving industry," he says. One popular attack is account takeover (ATO), which is used to steal other players' goods. It's a large market due to the sheer amount of value tied to a player account: from in-game currencies to achievements unlocked to player status and "skins".


“Enterprise-Class Open-Source Data Tools” Is Not an Oxymoron

Open source may bring up pictures of dark alleys and bug-ridden software, but in today’s data-driven world, there’s a new class of solutions. These open-source tools are the basis for inquiries into the deepest complexities of artificial intelligence and big data, designed around the massive data load we create each day. The open-source community works fast, addressing bugs, security loopholes, and the simple need to make streamlined tools for real-time insight. Today’s open-source tools result from years of research and a generation of developers who don’t remember a time when data wasn’t the new oil. Data itself is coming unlocked from previous silos and repositories, existing in a continuous state—data in motion. Leveraging open-source tools allows companies to dream of a reality in which company decisions are data-driven by the second. Every person in the organization has access to the data they need. Enterprises must find open-source tools with layers of capability explicitly designed for their unique data picture. These tools facilitate complex governance without creating pipeline bottlenecks. They provide automated documentation of changes, usage, and authorship.


Threat identification is IT ops' role in SecOps

Identifying important assets helps focus SecOps efforts. Additionally, IT operations teams should base threat identification practices on workflows. The goal is to understand workflows and their properties, as well as the statistical results of valid workflow patterns. IT ops teams can thus recognize the ways in which a workflow deviates from the norm, and potential threats because of this deviation. There are generally two pieces to this process: threat incident logging and tracking, and workflow monitoring for abnormal patterns. Many security threats to IT systems require multiple attempts by the attacker. At least some of these attempts get recognized, reported and logged as violations. However, logging tools often ignore a low volume of incidents. These tools use pattern analysis to indicate an active threat. To help the tools find these patterns, classify threat incidents. For example, a series of incidents from a single location or individual that has rarely generated an incident -- imagine someone entering the wrong password -- is a potential threat indicator. While multiple incidents stemming from one source is suspect, so is a series of incidents generated by different sources. Intruders might try several different IP addresses in an attack, for example. In this example, a pattern of events in the threat incident log will be obvious.
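 
The classify-then-look-for-patterns step can be sketched as a simple aggregation over a threat-incident log: group low-volume events by source, and also by target, so that both "one noisy source" and "one account hit from many sources" surface. Field names and thresholds below are illustrative assumptions:

```python
# Sketch of incident-log pattern analysis over individually ignorable events.
from collections import Counter

incidents = [
    {"source_ip": "203.0.113.7", "event": "failed_login", "account": "jsmith"},
    {"source_ip": "203.0.113.7", "event": "failed_login", "account": "jsmith"},
    {"source_ip": "198.51.100.2", "event": "failed_login", "account": "jsmith"},
    {"source_ip": "203.0.113.7", "event": "failed_login", "account": "admin"},
]

by_source = Counter(i["source_ip"] for i in incidents)
by_account = Counter(i["account"] for i in incidents)

# One account attacked from several sources is as suspicious as one noisy source.
print([ip for ip, n in by_source.items() if n >= 3])       # ['203.0.113.7']
print([acct for acct, n in by_account.items() if n >= 3])  # ['jsmith']
```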


Demystifying Behavior Driven Development with Cucumber-JVM

Keeping aside the fancy terms for end-to-end test writing such as reusability, maintainability, and scalability, I always prefer to have a simple definition for writing them. That is, test cases should be written and arranged in a way that they can run any number of times, in any sequence, and with a variety of different datasets. However, it is not as simple as it sounds. This kind of test writing approach demands that different teams collaborate to discuss product behavior from the very first day. Therefore, Behavior Driven Development is based on a fair collaboration among the three amigos (Business Analyst, Developer, and Tester) in its entirety. Intriguingly, the primary reason for the popularity of BDD testing is its non-technical, clear, and concise plain English [or any other international language of your choice] language. This way, a business owner can play a significant yet prompt role by specifying the requirement in a language which is understood not just by different teams (developers and testers) but also by the testing framework itself. In our case of Cucumber-JVM, the commonly understandable language is Gherkin, which shapes the overall concept. Gherkin is a language with no technical barriers ...



Quote for the day:

"Hold yourself responsible for a higher standard than anybody expects of you. Never excuse yourself." -- Henry Ward Beecher

Daily Tech Digest - September 06, 2020

Crypto-Friendly Banking Platform Cashaa Expanding in India, US, Africa

India’s cryptocurrency market has been growing rapidly ever since the country’s supreme court quashed the RBI circular that banned financial institutions from providing services to crypto businesses. India currently does not have any direct crypto regulations, but there are rumors of the government discussing the bill submitted by the inter-ministerial committee headed by former Finance Secretary Subhash Chandra Garg, which seeks to ban cryptocurrencies like bitcoin. However, the Indian crypto industry firmly believes that this bill is outdated and will not be the one the government introduces. “The Indian government is currently engaging with various stakeholders and trying to work out a solution. India today stands at a juncture, where it can actually embrace the digital currency ecosystem as it is pushing for the digital revolution and is leading the way in the fintech segment,” Gaurav opined. Cashaa will also focus on the U.S. next year, the CEO explained. “We have already started issuing USD accounts regulated by the Banking Division of Colorado to our existing business customers as beta users,” he further shared with news.Bitcoin.com, adding that some crypto clients already using Cashaa’s USD accounts include Nexo, Coindcx, and Unocoin.


Surging CMS attacks keep SQL injections on the radar during the next normal

Sending malicious commands to a web application can result in disclosure of users’ private data, and the attacker can gain access to a user’s computer. This method of injecting code within the same local execution infrastructure is relatively easy when compared to remote injection, which requires more specialized tools and skills. Here, the remote hacker only needs a security flaw that offers a small window to send commands to the remote execution environment, enabling the malicious code to run without any evaluation. As a result, attackers can create a remote entrance to reach the target environment, and oftentimes the administrator has no knowledge of the system being compromised. Most of the time, attackers make use of remote code execution security flaws that are on the web surface or within different narrow-use and specific ports and protocols. When a CMS is attacked, the remote code execution flaw often results from a connected platform such as the .NET environment, PHP scripting language, or file-sharing service or database that has remote code execution vulnerabilities.
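
The injection class in the headline comes down to mixing untrusted input into query or command text; for SQL specifically, the standard defence is parameterised queries. A minimal sketch of the difference, using SQLite purely for illustration:

```python
# Minimal illustration of the SQL injection class referenced above, and its fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"

# Vulnerable: untrusted input concatenated into the SQL text
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())           # returns rows it should not

# Safe: the driver passes the value as a parameter, never as SQL
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```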


Malware gang uses .NET library to generate Excel docs that bypass security checks

NVISO says the Epic Manchego gang appears to have used EPPlus to generate spreadsheet files in the Office Open XML (OOXML) format. The OOXML spreadsheet files generated by Epic Manchego lacked a section of compiled VBA code that is specific to Excel documents compiled in Microsoft's proprietary Office software. Some antivirus products and email scanners specifically look for this portion of VBA code to search for possible signs of malicious Excel docs, which would explain why spreadsheets generated by the Epic Manchego gang had lower detection rates than other malicious Excel files. This blob of compiled VBA code is usually where an attacker's malicious code would be stored. However, this doesn't mean the files were clean. NVISO says that the Epic Manchego gang simply stored their malicious code in a custom VBA code format, which was also password-protected to prevent security systems and researchers from analyzing its content. But despite using a different method to generate their malicious Excel documents, the EPPlus-based spreadsheet files still worked like any other Excel document.


American Express Establishes Data Analytics, Risk & Technology Lab (DART) In IIT Madras

The company hopes to apply these technologies across dimensions such as employee engagement and attention, evaluating and enhancing the quality of education and learning in school. The Lab at IIT Madras will explore a range of verticals with key emphasis on manufacturing, finance, healthcare, operations management and smart cities. “Our collaboration with IIT Madras reiterates our commitment to support and invest in interventions for public good in the country. The technologies and applied sciences R&D in the Lab will be beneficial for creating an overall societal impact through advancement in financial services, healthcare and safety standards,” said Bharathram Thothadri, EVP and Chief Credit Officer, American Express. It also plans to build talent for industry by partnering with academia while promoting talent and diversity in technology. It has also announced annual scholarships for economically-disadvantaged and meritorious students, including ‘Ambition Awards’ for deserving women students at IIT Madras.


Observability Strategies for Distributed Systems - Lessons Learned

All the panelists said some variation of, "make the easiest path the correct path," with Fong-Jones observing that, "teams are super lazy." Because most teams are focused on developing their service, find ways to create automatic dashboards and update runbooks. Spoons emphasized the need to create machine-readable central documentation. Similarly, using structured logging makes information digestible. That can greatly aid looking for patterns. One of the behaviors to encourage is being able to form and test hypotheses. Having all the data from across a distributed system can become overwhelming, so you need ways to narrow your focus. The practice of site-reliability engineering requires a different mindset than "ordinary" software engineering. Although DevOps has been an attempt to apply software engineering to IT operations, SRE takes an opposite approach when thinking about failure. This can be thought of as the duality between monitoring, which is looking for what is anticipated, and observing, where the focus is on what is unexpected. Each of the panelists had a few pitfalls that they've seen, and hope people will avoid. 
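
The structured-logging point is easy to make concrete: emit one machine-readable record per event, with named fields, so central tooling can filter and aggregate instead of parsing free text. A minimal sketch with illustrative field names:

```python
# Minimal structured-logging sketch: one JSON object per event.
import json
import logging
import sys
import time

logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout-service")

def log_event(event: str, **fields):
    # Key/value records can later be queried by field (e.g. all payment_failed
    # events for one order) rather than grepped out of prose-style log lines.
    log.info(json.dumps({"ts": time.time(), "event": event, **fields}))

log_event("order_placed", order_id=42, latency_ms=87, region="eu-west-1")
log_event("payment_failed", order_id=42, error="card_declined")
```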


Traditional Banking is an Endangered Species

For banks to survive in a post-COVID-19 world they must review their risk modelling strategies to accommodate the pandemics of the future, rather than falling back to what they know once COVID-19 has been contained. Banks need to ensure that remote working can be provisioned for effectively in the event of another pandemic, and need to abandon paper processes altogether. All of this is easier said than done and banks must spend time on ensuring they are effectively communicating across the entire workforce. For years, banks have been grappling with siloed data and now they must ensure they do not have siloed communications – where time and money could be lost if the workforce are not kept in the loop across the front end, e.g. products, solutions and services, and the back end, e.g. banking architecture. By harnessing the payments ecosystem, banks can collaborate with technology specialists, to keep up with the pace of demand for international, online payments. ‘Open Banking’ will enable banks to access the right technological expertise to solve the challenges they are facing on a daily basis, and provision fully for the needs of their new, existing and prospective customers.


Cybersecurity Pros Face a Huge Staffing Shortage As Attacks Surge During The Pandemic

Shearer said that to fill the talent gap, more outreach needs to be done to recruit younger workers, as well as more diverse cybersecurity workers, into the aging workforce. “Diversity is a big part of it — women are underrepresented, it’s improving. We also here in the United States need to look at other underrepresented minority groups and get them into the fold because it’s going to take everyone we can find to be interested in cyber,” he said. “As people start to retire, it’s only going to exacerbate the fact that it’s an undersized cyber workforce.” Jobs can be lucrative in the field as well—(ISC)2’s data finds the average North American salary for cybersecurity professionals is $90,000 a year, and those who hold security certifications can make more. ... Hiring has become somewhat easier in recent months, Wysopal says, a silver lining in the face of a broader skilled talent shortage in the industry. As the pandemic forced closures and layoffs in all sectors of the economy, more cyber workers have become available and, due to the nature of remote work, candidates outside of the area have become more appealing.


SASE vs SD-WAN: A Comparison

SASE’s focus is on providing secure access to distributed resources for the network and its users. The resources can be distributed in private data centers, colocation facilities, and the cloud. As such, security and networking decision-making are baked into the same security tools. SASE products have security tools that reside on a user’s device as a security agent, as well as in the cloud as a cloud-native software stack. For example, the security agent can contain a secure web gateway and a vendor’s cloud can contain a firewall-as-a-service. In a branch office or other location with a collection of people, a SASE appliance is common in order to secure agentless devices like printers. SD-WAN technology was not designed with a focus on security. SD-WAN security is often delivered via secondary features or by third-party vendors. While some SD-WAN solutions do have baked-in security, this is not the case for the majority. SD-WAN’s central goal is to connect geographically separate offices to each other and to a central headquarters, with flexibility and adaptability to different network conditions. In an SD-WAN, security tools are usually located at offices in CPE rather than on devices themselves.


3 Predictions For The Role Of Artificial Intelligence In Art And Design

Until we can fully understand the brain’s creative thought processes, it’s unlikely machines will learn to replicate them. As yet, there’s still much we don’t understand about human creativity. Those inspired ideas that pop into our brain seemingly out of nowhere. The “eureka!” moments of clarity that stop us in our tracks. Much of that thought process remains a mystery, which makes it difficult to replicate the same creative spark in machines. Typically, then, machines have to be “told” what to create before they can produce the desired end result. The AI painting that sold at auction? It was created by an algorithm that had been trained on 15,000 pre-20th century portraits, and was programmed to compare its own work with those paintings. ... Intelligent machines have no problem coming up with infinite possible solutions and permutations, and then narrowing the field down to the most suitable options – the ones that best fit the human creative’s “vision”. In this way, machines could help us come up with new creative solutions that we couldn’t possibly have come up with on our own.


Eight case studies on regulating biometric technology show us a path forward

The clearest one was the chapter on India by Nayantara Ranganathan, and the chapter on the Australian facial recognition database by Monique Mann and Jake Goldenfein. Both of these are massive centralized state architectures where the whole point is to remove the technical silos between different state and other kinds of databases, and to make sure that these databases are centrally linked. So you’re creating this monster centralized, centrally linked biometric data architecture. ... The second—and this is a lesson that we keep repeating—consent as a legal tool is very much broken, and it’s definitely broken in the context of biometric data. But that doesn’t mean that it’s useless. Woody Hartzog’s chapter on Illinois’s BIPA [Biometric Information Privacy Act] says: Look, it’s great that we’ve had several successful lawsuits against companies using BIPA, most recently with Clearview AI. But we can’t keep expecting “the consent model” to bring about structural change. Our solution can’t be: The user knows best; the user will tell Facebook that they don’t want their face data collected.



Quote for the day:

"The gem cannot be polished without friction, nor people perfected without trials." -- Confucius

Daily Tech Digest - September 05, 2020

A virtuous cycle: how councils are using AI to manage healthier travel

With local lockdowns being a new threat, councils face fresh calls to gather and understand social distancing requirements. This isn’t just in large towns and cities; local authorities need to be able to assess and understand risk across broader geographical areas to keep people safe. More small towns and villages are already installing cameras and sensors (or upgrading their current infrastructure) to capture data in their streets to identify places where people struggle to social distance. The city of Oxford, too, has implemented a large-scale deployment of cycling-specific sensors. Councils and other local authorities are taking their responsibilities seriously. Aiding all this is AI. Artificial intelligence can underpin a council’s strategy for coping with the Active Travel boom. In practical terms, this means positioning cameras at busy junctions, on popular footpaths, and around town and city centres, then analysing what those cameras see. It’s not just a numbers game, although knowing with confidence how many people are travelling in a certain area on a given day will certainly be useful. AI can quickly identify where social distancing is difficult to adhere to because of road or path layout, and spot dangerous behaviour such as undertaking or cyclists riding on pavements.


Inclusion And Ethics In Artificial Intelligence

The computer science and Artificial Intelligence (AI) communities are starting to awaken to the profound ways that their algorithms will impact society and are now attempting to develop guidelines on ethics for our increasingly automated world. The systems we require for sustaining our lives increasingly rely upon algorithms to function. More things are becoming increasingly automated in ways that impact all of us. Yet, the people who are developing the automation, machine learning, and the data collection and analysis that currently drive much of this automation do not represent all of us and are not considering all our needs equally. However, not all ethics guidelines are developed equally — or ethically. Often, these efforts fail to recognize the cultural and social differences that underlie our everyday decision making and make general assumptions about both what a “human” is and what constitutes “ethical human behavior”. As part of this approach, the US federal government launched AI.gov to make it easier to access all of the governmental AI initiatives currently underway. The site is the best single resource from which to gain a better understanding of the US AI strategy.


Our quantum internet breakthrough could help make hacking a thing of the past

Our current way of protecting online data is to encrypt it using mathematical problems that are easy to solve if you have a digital “key” to unlock the encryption but hard to solve without it. However, hard does not mean impossible and, with enough time and computer power, today’s methods of encryption can be broken. Quantum communication, on the other hand, creates keys using individual particles of light (photons), which – according to the principles of quantum physics – are impossible to make an exact copy of. Any attempt to copy these keys will unavoidably cause errors that can be detected. This means a hacker, no matter how clever or powerful they are or what kind of supercomputer they possess, cannot replicate a quantum key or read the message it encrypts. This concept has already been demonstrated in satellites and over fibre-optic cables, and used to send secure messages between different countries. So why are we not already using it in everyday life? The problem is that it requires expensive, specialised technology that means it’s not currently scalable. Previous quantum communication techniques were like pairs of children’s walkie talkies.
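
The no-cloning guarantee described here underpins quantum key distribution protocols such as BB84, where sender and receiver keep only the bits measured in matching bases and an eavesdropper's measurements show up as errors in that shared key. A highly simplified classical simulation of the sifting step, not real quantum hardware:

```python
# Highly simplified BB84 sifting sketch: keep only bits where the two sides
# happened to choose the same basis; an eavesdropper would introduce
# detectable errors into this sifted key.
import random

random.seed(0)
n = 16
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]
bob_bases = [random.choice("+x") for _ in range(n)]

# With matching bases Bob reads Alice's bit; with mismatched bases his result is random.
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

sifted_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
print(sifted_key)  # shared secret bits kept after publicly comparing bases
```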


Two Tools Every Data Scientist Should Use For Their Next ML Project

One of the key value propositions of data management is to deliver data to internal and external stakeholders in the required quality for different purposes. Data management sets up data value chains that turn raw data into meaningful information. Different data management capabilities should enable data value chains. The core data management capabilities taken into the “Orange” model are data modeling, information systems architecture, data quality, and data governance. In Figure 1, they are marked orange. These capabilities are performed by data management professionals. Other capabilities belong to other domains like IT, security, and other business support functions. To implement a data management capability, a company should establish a formal data management function. The data management function will become operational by implementing four key components that enable data management capability, such as processes, roles, tools, and data ... To make the evidence objective, it should be measurable. This is the second criterion. For example, you can prove your progress by demonstrating the number of data quality issues resolved within a specified period. You should also compare the planned and achieved numbers of resolved issues.


The fourth generation of AI is here, and it’s called ‘Artificial Intuition’

The fourth generation of AI is ‘artificial intuition,’ which enables computers to identify threats and opportunities without being told what to look for, just as human intuition allows us to make decisions without specifically being instructed on how to do so. It’s similar to a seasoned detective who can enter a crime scene and know right away that something doesn’t seem right, or an experienced investor who can spot a coming trend before anybody else. The concept of artificial intuition is one that, just five years ago, was considered impossible. But now companies like Google, Amazon and IBM are working to develop solutions, and a few companies have already managed to operationalize it. So, how does artificial intuition accurately analyze unknown data without any historical context to point it in the right direction? The answer lies within the data itself. Once presented with a current dataset, the complex algorithms of artificial intuition are able to identify any correlations or anomalies between data points.  Of course, this doesn’t happen automatically. First, instead of building a quantitative model to process the data, artificial intuition applies a qualitative model.
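
The article does not describe how such analysis is implemented. As a loose, non-authoritative illustration of surfacing anomalies in a dataset without any labelled history, the sketch below uses a conventional unsupervised technique (scikit-learn's Isolation Forest); it is an analogy only, not the proprietary “artificial intuition” systems the article refers to, and the synthetic data and parameters are assumptions.

```python
# Illustrative only: unsupervised anomaly detection on a dataset with no
# labelled history. This is a standard technique (Isolation Forest), used
# here as an analogy, not an implementation of "artificial intuition".
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # routine activity
odd = np.array([[6.0, 6.0], [-7.0, 5.5]])                # out-of-pattern points
data = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(data)
flags = model.predict(data)          # -1 marks points the model finds anomalous
print("anomalies flagged:", int((flags == -1).sum()))
```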


Vulnerability Management: Is In-Sourcing or Outsourcing Right for You?

While size is a factor, both small and large companies can benefit from leveraging the expertise of a partner. Small companies can get enterprise-level services for a fraction of the cost of supporting full-time employees; large companies can relieve their IT departments of time-consuming tasks and still save money. This allows both to focus on their core competencies – the outsource provider brings platform and process expertise to the table to help guide program maturity while handling the grind of scanning, analysis and reporting, which frees the customer organization to focus on operating its business and handling strategic technology initiatives. A qualified third-party company that specializes in VM already has certified security professionals on board who are not only up to speed with the latest threats, but also use the most effective detection tools and stay in the loop on important new information. If you answered in the affirmative to outsourcing VM, you’ll want to know how to select a company that is truly going to help you shore up the weaknesses in your defenses. First, you want one that has years of experience protecting businesses and offers dedicated support 24/7.


How to drive business value through balanced development automation

Operationally, challenges stem from misalignment in understanding who the end customer really is. Companies often design products and services for themselves and not for the end customer. Once an organization focuses on the end user and how they are going to use that product or service, the shift in thinking occurs: now it’s about looking at what activities need to be done to provide value to that end customer. Thinking this way surfaces features, functions, and processes that have never been done before. In the words of Stephen Covey, “Keep the main thing the main thing”. What is the main thing? The customer. What features and functionality do you need for each of them from a value perspective? And you need to add governance to that. Effective governance ensures delivery of a quality product or service that meets your objectives without monetary or punitive pain, and the end customer benefits from that product or service having effective and efficient governance. That said, heavy governance is also waste. There has to be a tension, a flow, a balance between Hierarchical Governance and Self-Governance, in which every person in the organization clearly understands the value their role contributes to the end customer.


Microservices Governance and API Management

Different microservices teams can have their own lifecycle definitions and different user roles to manage lifecycle state transitions, which allows teams to work autonomously. At the same time, the WSO2 digital asset governance solution allows these teams to create custom lifecycles and attach them to the services they implement. As part of that, there can be roles that verify overall governance across multiple teams by making sure that everyone follows the industry best practices accepted by the business. For example, if the industry best practice is to use the OpenAPI Specification for API definitions, every microservices team needs to adhere to that standard, since it is technology-neutral. At the same time, teams should have the autonomy to select the programming language and the libraries used in their development. Another key aspect of design-time governance is reusability: given that microservices often stem from new ideas, there can be situations where a service required to retrieve data for a new microservice implementation is already available, having been developed by another team.
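
As a small illustration of how such a design-time rule might be checked automatically, here is a hedged sketch that verifies each service repository ships an OpenAPI definition containing a few required top-level fields. The file paths, required keys and script name are assumptions for the example; this is not WSO2's governance implementation.

```python
# Illustrative design-time governance check (not WSO2's implementation):
# verify that each microservice ships an OpenAPI definition with a minimal
# set of top-level fields. Paths and required keys are assumed for the example.
import sys
import yaml  # PyYAML

REQUIRED_TOP_LEVEL = ("openapi", "info", "paths")

def check_openapi(path):
    """Return True if the file looks like a minimal OpenAPI definition."""
    with open(path) as f:
        spec = yaml.safe_load(f) or {}
    missing = [key for key in REQUIRED_TOP_LEVEL if key not in spec]
    if missing:
        print(f"{path}: missing required fields: {', '.join(missing)}")
        return False
    print(f"{path}: OpenAPI {spec['openapi']} definition found")
    return True

if __name__ == "__main__":
    # e.g. python check_openapi.py services/*/openapi.yaml
    results = [check_openapi(p) for p in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)
```

A check like this could run in each team's build pipeline, so the governance role only has to review exceptions rather than every service by hand.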


Why Observability Is The Next Big Thing In Security

Cloud-native infrastructures and security observability are purposefully designed to remove the security speed bumps that slow innovation down and instead apply a security guardrails approach that supports even faster software integration and delivery. When developers have tailored observability available – driven by automated security feedback cycles – they can focus on serving the customer, and teams can quickly learn from mistakes and rapidly deliver value and innovation to customers. Optimizing customer experiences on the fly, for example, is just one cloud-native advantage made possible by event-driven architectures (EDAs). DevOps teams are now smartly requiring embedded security context across the development life cycle in order to understand what is going on and to help automate the security of their cloud-delivered applications. Any migration to application programming interface (API) and event-driven architectures, such as cloud-native environments, can benefit from preexisting, automated, observable security deployed across the application development life cycle.


Why some artificial intelligence is smart until it's dumb

While practical uses get the most attention, machine learning also offers advantages for basic scientific research. In high-energy particle accelerators, such as the Large Hadron Collider near Geneva, protons smashing together produce complex streams of debris containing other subatomic particles (such as the famous Higgs boson, discovered at the LHC in 2012). With bunches containing billions of protons colliding millions of times per second, physicists must wisely choose which events are worth studying. It’s kind of like deciding which molecules to swallow while drinking from a firehose. Machine learning can help distinguish important events from background noise. Other machine learning algorithms can help identify particles produced in the collision debris. “Deep learning has already influenced data analysis at the LHC and sparked a new wave of collaboration between the machine learning and particle physics communities,” physicist Dan Guest and colleagues wrote in the 2018 Annual Review of Nuclear and Particle Science. Machine learning methods have been applied to data processing not only in particle physics but also in cosmology, quantum computing and other realms of fundamental physics, quantum physicist Giuseppe Carleo and colleagues point out in another recent review.
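
To make the signal-versus-background idea concrete, below is a generic, hedged sketch of a supervised classifier separating synthetic “signal-like” events from background; the features, sample sizes and model choice are invented for illustration and have nothing to do with the LHC collaborations' actual trigger or analysis software.

```python
# Illustrative only: a generic classifier separating "signal" events from
# background on synthetic features. Not actual LHC trigger/analysis code.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 5000
background = rng.normal(loc=0.0, scale=1.0, size=(n, 4))
signal = rng.normal(loc=0.8, scale=1.0, size=(n, 4))   # slightly shifted features
X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])           # 1 = signal-like event

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```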



Quote for the day:

"You do not lead by hitting people over the head. That's assault, not leadership." - Dwight D. Eisenhower