Daily Tech Digest - July 06, 2020

Benefits of RPA: RPA Best Practices for successful digital transformation

A main benefit of RPA solutions is that they reduce human error while enabling employees to feel more human by engaging in conversations and assignments that are more complex but also more rewarding. For instance, instead of having a contact center associate enter information while also speaking with a customer, an RPA solution can automatically collect data and upload or sync it with other systems for the associate to approve, leaving the associate free to focus on forming an emotional connection with the customer. Another benefit of RPA is that it can facilitate and streamline employee onboarding and training. An RPA tool, for instance, can pre-populate forms with the new hire’s name, address, and other key data from the resume and job application form, saving the employee time. For training, RPA can conduct and capture data from training simulations, allowing a global organization to ensure all employees receive the same information in a customized and efficient manner. RPA is not for every department, and it’s certainly not a panacea for retention and engagement problems. But by thinking carefully about the benefits it offers to employees, RPA can transform workflows—making employees’ jobs less robotic and more rewarding.
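
To make the onboarding example concrete, here is a minimal Python sketch of the kind of form pre-population step an RPA bot might perform. The field names and the mapping are hypothetical stand-ins, not a real HR system's schema; an actual bot would drive the HR application's UI or API with values like these.

```python
# Hypothetical sketch of RPA-style form pre-population: map fields parsed
# from a resume/application onto an HR form payload, and flag gaps for a
# human to review instead of guessing.
def prefill_onboarding_form(resume_data: dict) -> dict:
    form = {
        "first_name": resume_data.get("first_name", ""),
        "last_name": resume_data.get("last_name", ""),
        "address": resume_data.get("address", ""),
        "email": resume_data.get("email", ""),
    }
    # Anything the parser could not fill goes to a person, keeping the
    # human in the loop for judgment calls.
    form["needs_review"] = [field for field, value in form.items() if value == ""]
    return form

new_hire = {"first_name": "Ada", "last_name": "Lovelace", "email": "ada@example.com"}
print(prefill_onboarding_form(new_hire))  # 'address' lands in needs_review
```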


Hey Alexa. Is This My Voice Or a Recording?

The idea is to quickly detect whether a command given to a device is live or prerecorded. It's a tricky proposition, given that a recorded voice has characteristics similar to a live one. "Such attacks are known as one of the easiest to perform as it simply involves recording a victim's voice," says Hyoungshick Kim, a visiting scientist at CSIRO. "This means that not only is it easy to get away with such an attack, it's also very difficult for a victim to work out what's happened." The impacts can range from using someone else's credit card details to make purchases, to controlling connected devices such as smart appliances, to accessing personal data such as home addresses and financial records, he says. Other research teams have tackled the voice-spoofing problem and proposed their own countermeasures. In 2017, 49 research teams submitted work to the ASVspoof 2017 Challenge, a project aimed at developing countermeasures for automatic speaker verification spoofing. The competition produced one technology with a low error rate compared to the others, but it was computationally expensive and complex, according to Void's research paper.


Reduce these forms of AI bias from devs and testers

Cognitive bias means that individuals think subjectively rather than objectively, and that subjectivity influences the design of the products they create. Humans filter information through their unique experience, knowledge and opinions. Development teams cannot eliminate cognitive bias in software, but they can manage it. Let's look at the biases that most frequently affect quality, and where they appear in the software development lifecycle. Use the suggested approaches to overcome cognitive biases, including AI bias, and limit their effect on software users. A person knowledgeable about a topic finds it difficult to discuss that topic from a neutral perspective. The more the person knows, the harder neutrality becomes. That bias manifests within software development teams when experienced or exceptional team members believe that they have the best solution. Infuse the team with new members to offset some of the bias that occurs with subject matter experts. Cognitive bias often begins in backlog refinement. Preconceived notions about application design can affect team members' critical thinking. During sprint planning, teams can fall into the planning fallacy: underestimating the time actually needed to complete a user story.


Deploying the Best of Both Worlds: Data Orchestration for Hybrid Cloud

A different approach to bridging the worlds of on-prem data centers and the growing variety of cloud computing services is offered by a company called Alluxio. Since its roots in UC Berkeley's AMPLab, the company has been focused on solving this problem. Alluxio decided to bring the data to the computing in a different way. Essentially, the technology provides an in-memory cache that sits between cloud and on-prem environments. Think of it as a new spin on data virtualization, one that leverages an array of cloud-era advances. According to Alex Ma, director of solutions engineering at Alluxio: "We provide three key innovations around data: locality, accessibility and elasticity. This combination allows you to run hybrid cloud solutions where your data still lives in your data lake." The key, he said, is that "you can burst to the cloud for scalable analytics and machine-learning workloads where the applications have seamless access to the data and can use it as if it were local--all without having to manually orchestrate the movement or copying of that data."


Redis and open source succession planning

Speaking of the intersection of open source software development and cloud services, open source luminary Tim Bray has said, “The qualities that make people great at carving high-value software out of nothingness aren’t necessarily the ones that make them good at operations.” The same can be said of maintaining open source projects. Just because you’re an amazing software developer doesn’t mean you’ll be a great software maintainer, and vice versa. Perhaps more pertinently to the Sanfilippo example, developers may be good at both, yet not be interested in both. (By all accounts Sanfilippo has been a great maintainer, though he’s the first to say he could become a bottleneck because he liked to do much of the work himself rather than relying on others.) Sanfilippo has given open source communities a great example of how to think about “career” progression within these projects, but the same principle applies within enterprises. Some developers will thrive as managers (of people or of their code), but not all. As such, we need more companies to carve out non-management tracks for their best engineers, so developers can progress their career without leaving the code they love. 


How data science delivers value in a post-pandemic world

The uptick in the need for data science across industries comes with the need for data science teams. While hiring may have slowed in the tech sector – Google slowed its hiring efforts during the pandemic – data science professionals are still in high demand. However, it’s important to keep a close eye on how these teams continue to evolve. One position that is increasingly in demand as businesses become more data-driven is the role of the Algorithm Translator. This person is responsible for translating business problems into data problems and, once the data answer is found, articulating it back into an actionable solution for business leaders to apply. The Algorithm Translator must first break down the problem statement into use cases, connect these use cases with the appropriate data sets, and understand any limitations on the data sources so the problem is ready to be solved with data analytics. Then, in order to translate the data answer into a business solution, the Algorithm Translator must stitch the insights from the individual use cases together to create a digestible data story that non-technical team members can put into action.


Open source contributions face friction over company IP

Why the change? Companies that have established open source programs say the most important factor is developer recruitment. "We want to have a good reputation in the open source world overall, because we're hiring technical talent," said Bloomberg's Fleming. "When developers consider working for us, we want other people in the community to say 'They've been really contributing a lot to our community the last couple years, and their patches are always really good and they provide great feedback -- that sounds like a great idea, go get a job there.'" While companies whose developers contribute code to open source produce that code on company time, the company also benefits from the labor of all the other organizations that contribute to the codebase. Making code public also forces engineers to adhere more strictly to best practices than if it were kept under wraps and helps novice developers get used to seeing clean code.


How Ekans Ransomware Targets Industrial Control Systems

The Ekans ransomware begins the attack by attempting to confirm its target. It does this by resolving the domain of the targeted organization and comparing the resolved address to a specific, preprogrammed list of IP addresses, the researchers note. If the domain doesn't match the IP list, the ransomware aborts the attack. "If the domain/IP is not available, the routine exits," the researchers add. If the ransomware does find a match between the targeted domain and the list of approved IP addresses, Ekans then infects the domain controller on the network and runs commands to isolate the infected system by disabling the firewall, according to the report. The malware then identifies and kills running processes and deletes the shadow copies of files, which makes recovering them more difficult, Hunter and Gutierrez note. In the file-encryption stage of the attack, the malware uses RSA-based encryption to lock the target organization's data and files. It also displays a ransom note demanding an undisclosed amount in exchange for decrypting the files. If the victim fails to respond within the first 48 hours, the attackers threaten to publish the data, according to the Ekans ransom note recovered by the FortiGuard researchers.
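
The target-verification step the researchers describe boils down to an ordinary DNS lookup compared against a hard-coded allowlist. Here is a defender-oriented Python reconstruction of that gating logic, useful for reasoning about the kill-switch-like behavior when writing detections; the domain and IP values are placeholders, not actual Ekans indicators.

```python
import socket

# Placeholders for illustration only -- not real Ekans indicators.
TARGET_DOMAIN = "corp.example.com"
PREPROGRAMMED_IPS = {"203.0.113.10", "203.0.113.11"}

def target_confirmed(domain: str) -> bool:
    """Resolve the domain and proceed only on an exact allowlist match."""
    try:
        resolved = socket.gethostbyname(domain)
    except socket.gaierror:
        # "If the domain/IP is not available, the routine exits."
        return False
    return resolved in PREPROGRAMMED_IPS

if not target_confirmed(TARGET_DOMAIN):
    raise SystemExit("No match -- abort, mirroring the ransomware's early exit")
```

One detection-side takeaway from this pattern: a process performing a single DNS lookup and then exiting immediately on non-matching hosts leaves very little telemetry, which is part of what makes targeted ransomware like this hard to catch outside its intended victim.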


The best SSDs of 2020: Supersized 8TB SSDs are here, and they're amazing

If performance is paramount and price is no object, Intel’s Optane SSD 905P is the best SSD you can buy, full stop—though the 8TB Sabrent Rocket Q NVMe SSD discussed above is a strong contender if you need big capacities and big-time performance. Intel’s Optane drive doesn’t use traditional NAND technology like other SSDs; instead, it’s built around the futuristic 3D XPoint technology developed by Micron and Intel. Hit that link if you want a tech deep-dive, but in practical terms, the Optane SSD 905P absolutely plows through our storage benchmarks and carries a ridiculous 8,750TBW (terabytes written) rating, compared to the roughly 200TBW offered by many NAND SSDs. If that holds true, this blazing-fast drive is basically immortal—and it looks damned good, too. But you pay for the privilege of bleeding-edge performance. Intel’s Optane SSD 905P costs $600 for a 480GB version and $1,250 for a 1.5TB model, with several additional options available in both the U.2 and PCI-E add-in-card form factors. That’s significantly more expensive than even NVMe SSDs—and like those, the benefits of Intel’s SSD will be most obvious to people who move large amounts of data around regularly.


SRE: A Human Approach to Systems

Failure will happen, incidents will occur, and SLOs will be breached. These things may be difficult to face, but part of adopting SRE is acknowledging that they are the norm. Systems are made by humans, and humans are imperfect. What’s important is learning from these failures and celebrating the opportunity to grow. One way to foster this culture is to prioritize psychological safety in the workplace. The power of safety is obvious but often overlooked. Industry thought leaders like Gene Kim have been promoting the importance of feeling safe to fail. He addresses the issue of psychological insecurity in his novel, “The Unicorn Project.” The main character, Maxine, has been shunted from a highly functional team to Project Phoenix, where mistakes are punishable by firing. Kim writes: “She’s [Maxine] seen the corrosive effects that a culture of fear creates, where mistakes are routinely punished and scapegoats fired. Punishing failure and ‘shooting the messenger’ only cause people to hide their mistakes, and eventually, all desire to innovate is completely extinguished.”



Quote for the day:

"Education: the path from cocky ignorance to miserable uncertainty." -- Mark Twain

Daily Tech Digest - July 05, 2020

How Cryptocurrency Funds Work

This is generally the largest risk involved with investing in a cryptocurrency fund: clients need to put their trust in those behind it, which is why it is important to do research. The more information the managers are willing to share about who they are, how they manage the fund and what their track record is, the easier it is to determine whether they are right for an investor. That’s why, for many, partnering with a reputable firm is an essential part of trusting that they will see a return on their investment. Some of the biggest names in cryptocurrency funds include the Digital Currency Group, Galaxy Digital and Pantera Capital, among many others. All focus specifically on cryptocurrencies and other digital assets. Of course, these will still generally require large, upfront investments from qualified individuals. However, retail investors who want to be in on this type of action might want to look at projects like Tokenbox. In addition to acting as a general wallet and exchange, Tokenbox allows users to “tokenize” their portfolios as well as invest in the tokens attached to the portfolios of others. This acts as a streamlined way to either begin a new cryptocurrency fund or get involved in an existing one.


How DevOps teams can get more from open source tools

Open source tools can be a key first step on the DevOps path to achieving software development’s nirvana state, but only when teams bring automation and speed across the various steps of the process. That’s why professionals refer to a DevOps “toolchain” (the products you use) that supports the software “pipeline” (the process of delivering software) — and visually depict these elements as unfolding in a horizontal fashion. End-to-end tool coverage horizontally across an organization is the key to highly functional, mature DevOps practices. However, that’s easier said than done — and has traditionally been both expensive and difficult for businesses to achieve. The good news today is that there are many more open source options across every sequential step of the software delivery lifecycle (SDLC). From managing source code to storing build artifacts, monitoring releases and finally deploying — there’s an OSS solution for that if you know where to look. ... Perhaps less obvious is the notion that DevOps teams must also think about tool coverage and instrumentation for a vertical stack, which at a basic level breaks down into code, infrastructure, and data layers.


5G reinvented: The longer, rougher road toward ubiquity

There are two 5Gs, and that is by design. The architecture that purges the network of all radio and communications components and methods from the past, while maintaining compatibility with older devices (user equipment, or UE), is called 5G Stand-Alone (5G SA). Release 16 of the 3GPP engineers' architecture for global wireless communications is being formally ratified and finalized on July 3. It was delayed on account of the pandemic, but only by a handful of months. 3GPP R16 is the second round of 5G technologies, in a series that has at least one more round devoted to 5G, most likely two. The other 5G architecture is the one in use today in the United States: 5G Non-Stand-Alone (5G NSA). It relies on the underlying foundation and existing base station structure of 4G LTE. By building 5G services and service levels literally into crowns that reside above or below the 4G buildouts (a "crown castle," which also happens to be the name of one of North America's largest owners of telco tower real estate), 4G has been giving 5G a leg up. Once 5G has found its footing, the idea is that 4G can begin winding down.


Robotic Process Automation: 6 common misconceptions

RPA is best for activities that require multiple repetitions of the same sequence and could be conducted in parallel to create greater efficiencies. For example, B2B companies often have to check several portals or suppliers in order to buy inventory at the best rate. An employee would have to work through all the steps in each portal sequentially. But with RPA, the software robots act as “digital colleagues”. They monitor product prices and regularly inform employees about changes, retrieving figures from all portals simultaneously. Unlike BPM platforms, RPA isn’t capable of managing processes end-to-end over a longer period of time. An example: A customer wants to order something, complain or obtain information. Accordingly, a process is triggered in the company. Sometimes it can take up to 14 days until the request is completed. Although the digital colleague can support the employee by retrieving data on the customer, decisions are still made by the individual. That’s why a BPM solution is the much better choice, because the system can integrate employees into the process depending on availability and skills.
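
The parallel-retrieval idea is easy to see in code. A minimal Python sketch of the "digital colleague" pattern follows: query every supplier portal at once instead of stepping through each one in sequence. The fetch_price function is a hypothetical stand-in for each portal's API call or screen-scrape, and the portal names are invented.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a portal's API call or screen-scrape.
def fetch_price(portal: str) -> tuple[str, float]:
    return portal, round(random.uniform(90, 110), 2)  # simulated quote

portals = ["supplier-a", "supplier-b", "supplier-c"]

# Retrieve figures from all portals simultaneously, the way the article
# describes, rather than working through each portal sequentially.
with ThreadPoolExecutor(max_workers=len(portals)) as pool:
    quotes = dict(pool.map(fetch_price, portals))

best = min(quotes, key=quotes.get)
print(f"Best rate: {best} at {quotes[best]}")
```

The contrast with BPM also shows up here: this script finishes in seconds and exits, whereas a BPM engine would keep a long-running case open, routing steps to people over days or weeks.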


Remote workforce demands ‘hybrid working’, not the end of the office in the ‘better normal’

The study revealed universal approval of flexible working, across business structures and geographies, across generations and parental status. This, said Adecco, was a clear affirmation that the world is ready for hybrid working. Almost 80% of respondents thought it important that their company implement more flexibility in how and where staff can work. And it was not only employees who saw the benefits of this. Just over three-quarters (77%) of C-level/executive managers thought business will generally benefit from allowing increased flexibility around office and remote working. Also, 79% of C-level/executive management said they thought employees would benefit personally from having increased flexibility around office and remote working. Four-fifths of workers said it was important to be able to maintain a good work/life balance after the pandemic, and 50% said their work/life balance had improved during the lockdown. However, UK employees worry that their employer’s expectation of what hybrid working should look like after the pandemic will not match their own.


UNICEF turning to cryptocurrency in fight against Covid-19

The CryptoFund is aimed at supplementing this initiative to help companies specifically address the challenges created by the Covid-19 pandemic, which has brought to a head the problems that UNICEF’s funds are seeking to tackle, such as food supply and education. Investees have sought to mitigate some of the damaging effects of the pandemic on children through collaboration with governments and other local organisations in tracking delivery of food, offering remote learning and tending to other problems caused by lockdown and isolation. Among the companies receiving 125 ether are StaTwig from India, which is piloting a blockchain-based app designed to track the delivery of rice to impoverished communities, and Utopic from Chile, which aims to help improve children’s literacy from their homes using a WebVR-powered learning game. “We’re making investments into emerging technologies across data science, virtual reality and blockchain,” says Lamazzo, “but we’re also looking at the modality of the funding with the startups and trying to understand its benefits and drawbacks, so we’re going through this learning process together.”


How CTOs Can Innovate Through Disruption in 2020

Disruption is nothing new for technology leaders. In Gartner's survey of IT leaders, conducted in early 2020 before the coronavirus pandemic struck, 90% said they had faced a "turn" or disruption in the last 4 years, and 100% said they face ongoing disruption and uncertainty. The current crisis may just be the biggest test of the resiliency they have developed in response to those challenges. "We are hearing from a lot of clients about innovation budgets being slashed, but it's really important not to throw innovation out the window," said Gartner senior principal analyst Samantha Searle, one of the report's authors, who spoke to InformationWeek. "Innovation techniques are well-suited to reducing uncertainty. This is critical in a crisis." The impact of the crisis on your technology budget is likely dependent on your industry, Searle said. For instance, technology and financial companies tend to be farther ahead of other companies when it comes to response to the crisis and consideration of investments for the future. Other businesses, such as retail and hospitality, just now may be considering how to reopen.


Shadow IT: It's Nothing Personal

One of the things I still hear a lot from IT leaders, from small companies to large corporations, is that shadow IT is a big issue that causes them headaches. If you are not familiar with the term, shadow IT describes departments going outside of a centralized IT group to obtain products or services that IT traditionally controls, such as software-as-a-service offerings or devices. IT leaders bemoan the behavior that is causing departments to “go around IT” or “not follow the rules,” and often take the position that it’s simply bad behavior or some kind of vendetta against IT. More often than not, however, they fail to internalize and analyze the real cause of the phenomenon: it’s easier/cheaper/better to do business with other organizations. On occasion, they even get upset when I make this suggestion — at least until they stop and think carefully about what I’ve said. This is nothing personal. Departments, when trying to accomplish their essential business purpose, are, frankly, obligated to look for the best competitive solutions. It’s solely about doing smart business.



Shining a Low-Code Light on Shadow IT

Shadow apps are not, in themselves, a bad thing. Many of these systems fulfill a valid need and play a role in the success and/or survival of the organisation. Some IT departments are now openly recognising this and seeking to bring the alleged ‘rebels’ back into the IT fold. What IT really needs to achieve this is a technology approach that helps them deliver on these requirements at speed; technology that means that they no longer have to say ‘no’ or ‘yes, but later’ in response to requests from the business. Enabling IT to be agile by using ‘low-code’ rapid application development tools to build apps at high speed can overcome the bottlenecks. So instead of outlawing Shadow IT ideas, this new approach recognises and utilises their creativity. Low-code platforms, such as those offered by LANSA, provide the kind of prototyping capabilities needed to validate business needs directly with the users, iterate as they formalise their requirements, then speed up the final development way beyond the timescales they have been used to. The resulting apps are robust, well architected and high performance, and, importantly, are managed and easily maintained by IT.


Robotic Process Automation in legal - a bright future

If we are going to be precise, we should put AI as a subset of robotic process automation. Artificial intelligence is frequently associated with robots and bots in the broader sense. But it is commonly misunderstood by the public and, by extension, lawyers. At times, it's likely over-glorified by legal tech companies, pundits, and publications. John McCarthy coined the term AI in 1956. He used it to label machines that mirror certain human cognitive traits (i.e., learning, thinking, remembering, problem-solving, and making decisions). In essence, artificial intelligence represents machines (algorithms) that can analyze vast bodies of data, learn, and correct their behavior in the process. As such, artificial intelligence depends quite a lot on the quality of data. You can't have good learning if data is sparse or if samples aren't representative. So far, the necessity of training and data quality (or its availability in the first place) has represented a significant barrier to the adoption of AI in the legal industry.



Quote for the day:

"Every great leader can take you back to a defining moment when they decided to lead" -- John Paul Warren

Daily Tech Digest - July 04, 2020

What are IT pros concerned about in the new normal? Security and flexibility

What's also interesting is that, despite this workload increase, the majority (77%) feel they have been very effective at supporting employees working from home. This is great to hear, and not entirely surprising, as these companies rely on SaaS to run their businesses. On the flip side, laggards running legacy infrastructures have seen productivity go to zero. This is definitely a tipping point for the adoption of SaaS. Our survey also reinforces this sentiment, as 47 percent of respondents said they will increase the use of SaaS as a result of the pandemic. ... IT teams at every company we work with have had to implement new processes to support the entire employee base, leveraging and adjusting methods, tools and processes to enable business continuity with a nearly 100% work-from-home workforce. Work from home is not a new concept, but supporting traditional remote laptop users is not the same challenge as supporting desktop users who may not be using corporate-issued devices and computers. Companies were forced to immediately extend to the entire employee base processes that had previously worked only for laptop users who were already practiced remote workers.


Singapore banks set to fast-track digital transformation due to COVID-19

As banks re-evaluate their digital strategies, it only makes sense to ensure compliance is automated in order to easily and efficiently adhere to all AML, KYC and CTF regulations. Regulatory technologies, which use artificial intelligence (AI), are particularly valuable when it comes to automating compliance. AI can help mine huge volumes of data, automatically flagging risk-relevant facts faster than humanly possible. AI technology dramatically speeds up the onboarding phase. The technology helps to automatically identify illicit client relationships and alert financial institutions to the possibility of criminal or terrorist activity. With regulatory requirements being constantly updated, it can be difficult for banks to keep on top of these changes via manual processes alone. By implementing AI technology, financial institutions are better able to identify gaps in customer information, with the technology automatically prompting them to perform regulatory outreach to collect the outstanding information – a far more streamlined and hands-off approach than what many banks in Singapore are currently using.
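
The gap-identification step, at its simplest, is a completeness check over customer records. Here is a toy Python sketch of that idea; the field names are assumptions for illustration, not a real regulatory schema, and production regtech would layer risk scoring and screening on top.

```python
# Toy illustration of automated regulatory outreach: flag customer
# records with missing KYC fields so outreach can be triggered.
REQUIRED_KYC_FIELDS = ["legal_name", "date_of_birth", "address", "id_document"]

def kyc_gaps(customer: dict) -> list[str]:
    """Return the required fields that are absent or empty."""
    return [field for field in REQUIRED_KYC_FIELDS if not customer.get(field)]

customer = {"legal_name": "Acme Pte Ltd", "address": "1 Raffles Place"}
missing = kyc_gaps(customer)
if missing:
    print(f"Outreach needed -- missing fields: {', '.join(missing)}")
```

The value of automating even this trivial rule is scale: run it continuously across millions of records, and the "prompting" the article describes becomes a standing process rather than a periodic manual review.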


Top ten myths of technology modernization in insurance

Modernization simply means replacing the core platform with the best-in-class option. The reality: Core-platform replacements often have higher up-front investment costs than in-place IT modernization, as they require both software and hardware, experts’ time, and extensive testing. Furthermore, migrating existing policies and their implicit contracts to a new platform is often expensive—these additional costs need to be factored into any decision—and time-consuming. One big reason for high modernization costs is the age and quality of the policy data and rules—poorly maintained policies are expensive to refresh and modernize to work in a new system. Product types and geographic context are also considerations. For instance, US personal property and casualty (P&C) policies are generally issued annually and thus have up-to-date policy data and rules; this makes migration efforts more straightforward. By contrast, in countries such as Austria or Germany, policies are refreshed annually to adjust premiums for inflation, but policy data, rules, and terms only change when a customer switches to a new policy—which may not happen for many years. Therefore, policy rules need to be carried over to the target system or customers need to switch to a new policy during modernization, rendering it time-consuming.


Microsoft Defender ATP now rates your security configurations

Microsoft promises the data in the scorecard is the product of "meticulous and ongoing vulnerability discovery", which involves, for example, comparing collected configurations with collected benchmarks, and collecting best-practice benchmarks from vendors, security feeds, and internal research teams. Defender ATP users will see a list of recommendations based on what the scan finds. Each entry contains the issue, such as whether a built-in administrator account has been disabled, the version of Windows 10 or Windows Server scanned, and a description of the potential risks. For this particular risk, Microsoft explains that the built-in administrator account is a favorite target for password-guessing, brute-force attacks and other techniques, generally after a security breach has already occurred. Defender ATP also provides the number of accounts exposed on the network and an impact score. Users can export a checklist of remediations in CSV format to share with team members and to ensure the measures are undertaken at the appropriate time. An organization's security score should improve once remediations are completed.


Working with Complex Data Models

Physical data models present an image of a data design that has been implemented, or is going to be implemented, in a database management system. A physical model is database-specific, representing relational data objects (columns, tables, primary and foreign keys) as well as their relationships. Physical data models can also generate DDL (data definition language) statements, which are then sent to the database server. Implementing a physical data model requires a good understanding of the characteristics and performance parameters of the database system. For example, when working with a relational database, it is necessary to understand how the columns, tables, and relationships between them are organized. Regardless of the type of database (columnar, multidimensional, or otherwise), understanding the specifics of the DBMS is crucial to integrating the model. According to Pascal Desmarets, Founder and CEO of Hackolade: “Historically, physical Data Modeling has been generally focused on the design of single relational databases, with DDL statements as the expected artifact. Those statements tended to be fairly generic, with fairly minor differences in functionality and SQL dialects between the different vendors. ...”
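
To make the model-to-DDL step concrete, here is a small sketch using SQLAlchemy, one common Python route from a physical model to vendor-specific DDL. The customers/orders tables are invented for illustration.

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, String, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

metadata = MetaData()

# A tiny physical model: columns, a primary key, and a foreign key.
customers = Table(
    "customers", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(100), nullable=False),
)
orders = Table(
    "orders", metadata,
    Column("id", Integer, primary_key=True),
    Column("customer_id", Integer, ForeignKey("customers.id")),
)

# Emit the CREATE TABLE statement in a specific vendor's dialect -- the
# "fairly minor differences in SQL dialects" the quote refers to live here.
print(CreateTable(orders).compile(dialect=postgresql.dialect()))
```

Swapping the postgresql dialect for another one changes only the vendor-specific details of the emitted statement, which is exactly the kind of mostly-generic DDL artifact Desmarets describes.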


'Machine' examines Artificial Intelligence and asks, 'Are we screwed?'

These AI systems are trained on huge amounts of data, and you'll find bias when, say, there's facial recognition. If all your facial recognition data set is Caucasians, it's going to have trouble identifying people by their races. And being misidentified by facial recognition is not a good thing when it comes to law enforcement, other things like this. So, we're finding, even through the course of making the film, this technology moves so fast, but we've seen a lot being done to address the problem of bias in data sets since we started. And they're finding that more diversity within these data sets actually has helped reduce bias in a lot of these algorithms, which is a positive sign. But at the end of the day, I think we're still at the point where we don't want to give these algorithms too much control. I think there needs to be humans in the loop who understand ethics, and not everything in life boils down to zeros and ones, and Xs and Os. So, I think it's good to have humans in the loop and also society in the loop, not just the people designing these technologies, but society as a whole should be hip to what's going on. Because if not, you're going to wake up in 20 years and be living in a very different world, I think.


Pandemic reveals opportunities for 5G connectivity

Because 5G technology can now be cloud orchestrated—that is, use software-defined principles to manage the interconnections and interactions among workloads on public and private cloud infrastructure—the behavior of the 5G network can be changed to accommodate specific applications for specific uses. Roese shared a dramatic example of this by describing a telehealth scenario in which suspected stroke victims could be diagnosed and receive initial treatment while en route to the hospital. This would be accomplished by using the continuous collection and streaming of patient data. “In order to do that, a whole bunch of conditions had to be true,” said Roese. “You had to push the code out to an edge, so it can operate in real time. You had to execute a network slice to guarantee the bandwidth and give this a priority service.” If such allocation were done manually, it might take three hours or more to reconfigure the network. One thing that makes mobile triage possible is strength at the edge of the cellular network. That is also crucial for innovation—as well as for the average 5G user. “What that means is you’re walking around in a city and if you constantly get 100-to-200 megabits per second, the peak rates might be five-to-10 gigabits per second,”


Design Patterns — Zero to Hero — Factory Pattern

Before moving into the explanation, we need a clear understanding of the term concrete class. A class that has an implementation for all of its methods is called a concrete class. Concrete classes cannot have any unimplemented methods. A concrete class can extend an abstract class or an interface, as long as it implements all of their methods too. Simply put, all classes that are not abstract classes are considered concrete classes. Actually, according to Head First Design Patterns, Simple Factory is not considered a design pattern. Let’s get started understanding the Factory Pattern varieties. The Simple Factory Pattern describes a way of instantiating classes using a method with a large conditional that, based on the method’s parameters, chooses which product class to instantiate and return. Let’s dive into a coding example where the Simple Factory Pattern comes into play. Imagine a scenario where we have different brands of smartphones, and you need to retrieve the specification details of the respective brands, with the brand name passed as a parameter from the client code.
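
A minimal rendering of that smartphone scenario follows, in Python for brevity (the source material likely uses Java; the class and brand names here are illustrative).

```python
# Simple Factory: one method whose conditional decides which concrete
# product class to instantiate, based on a parameter from client code.
class Smartphone:
    def specs(self) -> str:
        raise NotImplementedError

class GalaxyPhone(Smartphone):
    def specs(self) -> str:
        return "Galaxy: 6.2in display, 8 GB RAM"

class PixelPhone(Smartphone):
    def specs(self) -> str:
        return "Pixel: 6.0in display, 8 GB RAM"

class SmartphoneFactory:
    def create(self, brand: str) -> Smartphone:
        # The "large conditional" the text describes.
        if brand == "galaxy":
            return GalaxyPhone()
        if brand == "pixel":
            return PixelPhone()
        raise ValueError(f"Unknown brand: {brand}")

# Client code passes the brand name as a parameter and never touches
# the concrete classes directly.
phone = SmartphoneFactory().create("pixel")
print(phone.specs())
```

The trade-off is that adding a new brand means editing the factory's conditional, which is one reason Head First Design Patterns treats Simple Factory as a programming idiom rather than a full pattern.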


Evolution of Voice-activated Banking

Instead of having to call customer care representatives and wait to get their queries resolved, consumers should be able to quickly get relevant information simply by asking. The financial services industry is addressing the one-click, on-the-go behaviour of consumers by launching various innovative solutions, such as mobile wallets, which have become a highly convenient method of payment, and chatbots, which have become very popular. Banks are constantly looking to enhance customer experience by providing ways for customers to get the desired information as and when they want it. The opportunity lies in integrating all branch transactional activities with voice technology. Currently, voice assistants handle basic customer queries, such as checking account balances, making payments, paying bills and getting account-related information. The simple nature of these requests enables institutions to instantly provide the right information at the right time; however, this is unlikely to provide a competitive advantage in future. Companies that reimagine the customer journey across channels, products, and services with end-to-end integration will emerge as winners.


Fintech In Banking: New Standards For The Financial Sector

Distributed ledger technologies, widely known as blockchains, have already moved out of the shadows of public interest and are now treated as paradigm-changing technologies that turn the interaction between Fintech and banks upside down. Research by Accenture shows that 9 in 10 executives are considering the implementation of blockchain technology in their financial services. Blockchain aims at boosting mutual benefits and reducing business risks from collaboration and mutual Fintech investment banking. Using a decentralized database, banks receive an opportunity to work together on a common solution, keeping their own data secure and opening certain pieces of data only when they want to interact and trade. It ensures complete transparency and real-time execution of payments, which significantly minimizes the possibility of cyber-attacks, as the information doesn’t exist in a centralized database anymore. Blockchain technology is also very helpful with KYC (Know Your Customer) compliance. In traditional banking, KYC usually causes delays to banking transactions, entails substantial duplication of effort between banks and third parties, and ends up being very costly.




Quote for the day:

"When you expect the best from people, you will often see more in them than they see in themselves." -- Mark Miller

Daily Tech Digest - July 03, 2020

Designing data governance that delivers value

Without quality-assuring governance, companies not only miss out on data-driven opportunities; they waste resources. Data processing and cleanup can consume more than half of an analytics team’s time, including that of highly paid data scientists, which limits scalability and frustrates employees. Indeed, the productivity of employees across the organization can suffer: respondents to our 2019 Global Data Transformation Survey reported that an average of 30 percent of their total enterprise time was spent on non-value-added tasks because of poor data quality and availability ... The first step is for the DMO (data management office) to engage with the C-suite to understand their needs, highlight the current data challenges and limitations, and explain the role of data governance. The next step is to form a data-governance council within senior management (including, in some organizations, leaders from the C-suite itself), which will steer the governance strategy toward business needs and oversee and approve initiatives to drive improvement—for example, the appropriate design and deployment of an enterprise data lake—in concert with the DMO. The DMO and the governance council should then work to define a set of data domains and select the business executives to lead them.


How to Kill Your Developer Productivity

The problems start when teams get carried away with microservices and take the "micro" a little too seriously. From a tooling perspective you will now have to deal with a lot more YAML files and Dockerfiles, with dependencies between the variables of these services, routing issues, etc. They need to be maintained, updated, cared for. Your CI/CD setup, as well as your organizational structure and probably your headcount, needs a revamp. If you go into microservices for whatever reason, make sure you plan sufficient time to restructure your tooling setup and workflow. Just count the number of scripts in various places you need to maintain. ... Kubernetes worst case: Colleague XY really wanted to get his hands dirty and found a starter guide online. They set up a cluster on bare metal and it worked great with the test app. They then started migrating the first application and asked their colleagues to start interacting with the cluster using kubectl. Half of the team is now preoccupied with learning this new technology. The poor person now maintaining the cluster will be full time on this the second the first production workload hits the fan.


A Brief History of Data Lakes

Data Lakes are consolidated, centralized storage areas for raw, unstructured, semi-structured, and structured data, taken from multiple sources and lacking a predefined schema. Data Lakes have been created to save data that “may have value.” The value of data and the insights that can be gained from it are unknowns and can vary with the questions being asked and the research being done. It should be noted that without a screening process, Data Lakes can support “data hoarding.” A poorly organized Data Lake is referred to as a Data Swamp. Data Lakes allow Data Scientists to mine and analyze large amounts of Big Data. Big Data, which was used for years without an official name, was labeled by Roger Magoulas in 2005. He was describing a large amount of data that seemed impossible to manage or research using the traditional SQL tools available at the time. Hadoop (2008) provided the search engine needed for locating and processing unstructured data on a massive scale, opening the door for Big Data research. In October of 2010, James Dixon, founder and former CTO of Pentaho, came up with the term “Data Lake.” Dixon argued Data Marts come with several problems, ranging from size restrictions to narrow research parameters.


What is agile enterprise architecture?

An important group of agility dimensions relates to the process of strategic planning, where business leaders and architects collectively develop the global future course of action for business and IT. One of these dimensions is the overall amount of time and effort devoted to strategic planning. Some companies invest considerable resources in discussions of their future evolution, while other companies pay much less attention to these questions. Another dimension is the organisational scope covered by strategic planning. Some companies embrace all their business units and areas in their long-range planning efforts, while others intentionally limit the scope of these efforts to a small number of core business areas. A related dimension is the horizon of strategic planning. Some organisations plan for no more than 2-3 years ahead, but others need five-year, or even longer, planning horizons. Yet another relevant dimension is how the desired future is defined. Some companies create rather concrete descriptions of their target states, while others define their future only in terms of planned initiatives in investment roadmaps.


How to Guard Against Governance Risks Due to Shadow IT and Remote Work

Shadow IT evolves in organizations when workers, teams, or entire departments begin to improvise their work processes through unauthorized services or practices that operate outside the oversight and control of IT. It may involve something as seemingly harmless as storing work documents on a personal laptop, or it could pose a catastrophic risk by transferring confidential intellectual property or regulated private data via an unsecured personal file sharing service. ... Although productivity is critical, the use of personal cloud file services, ad hoc team network file shares, and personal email for file transfer undermine governance and represent material risk from a discovery, privacy, and noncompliance perspective. Without equipping your employees with productivity tools that address governance requirements, they pursue novel techniques without understanding the risks. Transferring documents via email, Dropbox, or Google Drive may seem ingenious; in reality, users may not understand the dangers posed by insufficient authentication or auditing or the direct violation of data privacy requirements. What's more, unmanaged deletion of work product may violate legal hold requirements.


How to Convince Stakeholders That Data Governance is Necessary

Oftentimes, the data consumers don’t have an inventory of the data available to them. The consumers don’t have business glossaries, data dictionaries and data catalogs that house information about the data and would improve their understanding of it (and access to the metadata might be a problem even if it is available). They don’t immediately know whom to reach out to to request access to the data (which they may not know exists in the first place). And the rules associated with the data are not documented in resources that are available to data consumers, thus putting all of this effort, post hoop-jumping, at risk anyway. If you ask data consumers, casual data users, and data scientists what causes delays and problems in completing their normal job, you can expect to get answers like those listed in the previous paragraph that will boggle your mind. At that point, you will begin to understand the often-mentioned 80/20 rule. This rule states that eighty percent of their time is spent data wrangling and the other twenty percent is spent actually doing the analysis, meaningful reporting and question-answering that is truly part of their job.


Studying an 'Invisible God' Hacker: Could You Stop 'Fxmsp'?

Experts say the group was extremely well-organized and used teams of specialists, built a sophisticated botnet and sold remote access and exfiltrated data in the course of perfecting the botnet to help monetize those efforts. Or at least that was the group's MO until AdvIntel dropped a report in May 2019 documenting Fxmsp's activities. Shining a light on the gang - which relied in large part on advertising via publicly accessible cybercrime forums - caused the group to disappear. "The Fxmsp hacking collective was explicitly reliant on the publicity of their offers in the dark market auctions and underground communities," Yelisey Boguslavskiy, CEO of AdvIntel, tells me. After the report's release, he says Fxmsp disappeared from public view, although it's not clear if the hacker with that handle might still be operating privately. Study Fxmsp's historical operations, and a less-is-more ethos emerges. "In most cases, Fxmsp uses a very simple, yet effective approach: He scans a range of IP addresses for certain open ports to identify open RDP ports, particularly 3389. Then, he carries out brute-force attacks on the victim's server to guess the RDP password," Group-IB says in a recap.
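
Group-IB's description is simple enough to invert for defense: scan your own address space for exposed RDP before someone like Fxmsp does. A minimal Python sketch under that assumption follows; the host list is a placeholder, and this should only ever be pointed at machines you own.

```python
import socket

RDP_PORT = 3389  # the port Group-IB says Fxmsp scans for

def rdp_exposed(host: str, timeout: float = 2.0) -> bool:
    """Return True if the host accepts TCP connections on 3389."""
    try:
        with socket.create_connection((host, RDP_PORT), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder hosts -- run this only against your own perimeter.
for host in ["203.0.113.5", "203.0.113.6"]:
    if rdp_exposed(host):
        print(f"{host}: RDP reachable from outside -- gate it behind a "
              f"VPN and enforce account-lockout/MFA against brute force")
```

Given that the second half of the technique is plain password guessing, the standard mitigations follow directly: no internet-facing RDP, strong unique passwords, lockout policies, and MFA.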


4 common software maintenance models and when to use them

Quick-fix: In this model, you simply make a change without considering efficiency, cost or possible future work. The quick-fix model fits emergency maintenance only. Development policies should forbid the use of this model for any other maintenance motives. Consider forming a special team dedicated to emergency software maintenance. ... Iterative: Use this model for scheduled maintenance or small-scale application modernization. The business justification for changes should either already exist or be unnecessary. The iterative model only gets the development team involved. The biggest risk here is that it doesn't include business justifications -- the software team won't know if larger changes are needed in the future. The iterative model treats the application target as a known quantity. ... Reuse: Similar to the iterative model, the reuse model includes the mandate to build, and then reuse, software components. These components can work in multiple places or applications. Some organizations equate this model to componentized iteration, but that's an oversimplification; the goal here is to create reusable components, which are then made available to all projects under all maintenance models. 


Newly discovered principle reveals how adversarial training can perform robust deep learning

Why do we have adversarial examples? Deep learning models consist of large-scale neural networks with millions of parameters. Due to the inherent complexity of these networks, one school of researchers believes in a “cursed” result: deep learning models tend to fit the data in an overly complicated way, so that for every training or testing example there exist small perturbations that change the network output drastically. This is illustrated in Figure 2. In contrast, another school of researchers holds that the high complexity of the network is a “blessing”: robustness against small perturbations can only be achieved when high-complexity, non-convex neural networks are used instead of traditional linear models. This is illustrated in Figure 3. It remains unclear whether the high complexity of neural networks is a “curse” or a “blessing” for the purpose of robust machine learning. Nevertheless, both schools agree that adversarial examples are ubiquitous, even for well-trained, well-generalizing neural networks.
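
Both schools treat robustness to small perturbations as the target property, which the literature usually formalizes as the min-max objective of adversarial training (the standard formulation, e.g. from Madry et al.; the paper discussed here may use its own notation):

```latex
% Adversarial training: fit parameters \theta against the worst-case
% perturbation \delta inside an \epsilon-ball around each input.
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[ \max_{\|\delta\| \le \epsilon} L\big(f_{\theta}(x+\delta),\, y\big) \Big]
```

The inner maximization searches for the worst perturbation within an epsilon-ball around each input; in practice it is approximated with gradient-based attacks such as projected gradient descent, and the outer minimization trains the network on those worst-case points.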


AI Adoption – Data governance must take precedence

Obstacles are to be expected on the path to digital transformation, particularly with unfamiliar entities in the mix. For AI adoption, the most prevalent obstructions are: a company culture that doesn’t recognise a need for AI, difficulties in identifying business use cases, a skills gap or difficulty hiring and retaining staff, and a lack of data or data quality issues. With this broad spectrum of challenges, it is worth delving into a couple of them. Firstly, it is interesting to note that an incompatible company culture mostly affects those companies that are in the evaluation stage with AI. Rephrased, perhaps it is obvious – a company with “mature” AI practices is 50 percent less likely to see no use for AI. By contrast, in a company where AI is not yet an integrated business function, resistance is more likely. Secondly, AI adopters are more likely to encounter data quality issues; by virtue of working closely with data and requiring good data practice, they are more likely to notice when errors and inconsistencies arise. Conversely, companies in the evaluating stages of AI adoption may not be aware of the extent of any data issues.



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." -- Orrin Woodward

Daily Tech Digest - July 02, 2020

Israel Finally Readying a Fintech ‘Regulatory Sandbox’

A draft of the law calls for the sandbox, formally called an Experimental Environment, to be operated by a committee comprising officials from the Bank of Israel’s banks supervision division, the Capital Markets, Savings and Insurance Authority, the Israel Securities Authority and the Anti-Money Laundering Authority. It will have the authority to create a “regulatory playground” of up to two years, with the option of extending it for a second two years. The sandbox will offer two tracks to participating companies – a licensing track for firms that need approvals from one or more regulators and an escort track for all others. Companies in the licensing track will be able to apply to regulators to award them adjusted or less stringent regulations for a limited period of time. One example of a less stringent rule would be to drop the requirement for a minimum number of clients. Firms in the escort track will benefit mainly from easier terms for meeting anti-money laundering rules. Finance Ministry officials said they hope this will lower the risk startups assume vis-a-vis the law and enable the Bank of Israel to ensure they get access to banking services.


The importance of 5G, AI and embracing new technologies in a post-Covid world

AI remains an ever-developing technology whose potential is still being realised, with smart factories, smart farms and smart cities soon to become the norm in the coming years. A smooth transition to an AI-enhanced workplace will involve frontline staff identifying those tasks best suited to automation, empowering them to contribute to making a difference in their business. AI-powered machines will be able to interpret the real world in the same context as we can. One such application will be to help autonomous vehicles navigate poor road and weather conditions, which will make a potentially huge difference to road safety. AI will allow businesses to boost productivity, increase agility and flexibility, spur innovation and be the root of digital transformation. AI is not just about robots, computing and smart factories; it’s also about real applications in people’s everyday lives. For example, Huawei has developed StorySign, a mobile application to help deaf children learn to read in a fun and engaging way. It is a global initiative, and in Ireland the company worked with the Irish Deaf Society to develop it for the Irish market, because technology should be used to encourage digital inclusion for all.


Vulnerable drivers can enable crippling attacks against ATMs and POS systems

As part of their research project, the Eclypsium researchers found a vulnerability in a driver used in an ATM model from Diebold Nixdorf, one of the largest manufacturers of devices for the banking and retail sectors. The driver enables applications to access the various x86 I/O ports of such a system. ATMs are essentially computers with specialized peripherals like the card reader, PIN pad, network interfaces or the cash cassettes that are connected through various communication ports. By gaining access to the I/O ports through the vulnerable driver, an attacker can potentially read data exchanged between the ATM's central computer and the PCI-connected devices. Moreover, this driver can be used to update the BIOS, the low-level firmware of a computer that starts before the operating system and initializes the hardware components. By exploiting this functionality, an attacker could deploy a BIOS rootkit that would survive OS reinstallations, leading to a highly persistent attack. To the researchers' knowledge, the vulnerability hasn't been exploited in any real-world attack, but based on their discussions with Diebold, they believe the same driver is used in other ATM models as well as POS systems.


Lessons from COVID-19 Cyberattacks: Where Do We Go Next?

One thing that's interesting to note is that we haven't seen a lot of shift in terms of innovative or novel techniques and tricks. While approaches have certainly been sophisticated, bad actors have tended to rely on old standards (such as social engineering and ransomware). That's because if the old tricks still work, they aren't likely to change tactics until they see their success rate dropping. Cybercriminals are leveraging well-known advanced attack techniques and layers of obfuscation — which means they have a decent likelihood of breaking into networks and should be treated accordingly. Again, it all goes back to the heightened sense of fear and anxiety that the pandemic has ushered in. Bad actors are all too aware that when people's guards are down, they may not be practicing best-in-class cyber hygiene. The importance of due diligence cannot be stressed enough. Some might argue that too much caution can be counterproductive, but it's certainly less counterproductive than having your entire company shut down because someone didn't double and triple check before clicking that file.


Android security: This fake message about a missed delivery leads to data-stealing malware

The fake applications are built using WebView and designed to look like the real thing. After the application is downloaded – which requires the user to allow installation from unknown sources – the fake page redirects to the legitimate website in an effort to stop the victim becoming suspicious about what they've just downloaded. The malware also asks for a number of permissions it requires to operate – but given that so many legitimate applications ask for extensive use of the device anyway, the victim is unlikely to give it a second thought. Once installed, FakeSpy can monitor the device to steal various forms of information, including name, phone number, contacts, bank and cryptocurrency wallet details, as well as monitoring text messages and app usage. FakeSpy also exploits the infection to spread itself, sending the postal-themed phishing message to all of the victim's contacts. This indicates it isn't a targeted campaign but rather a financially driven cyber-criminal operation looking to spread as far and wide as possible, with the aim of making as much money as possible from stolen bank information and other personal credentials.


How Edge Computing and 5G Work Together

Ericsson’s Head of Marketing and Communications for Networks, Cecilia Atterwall, says that 5G will unleash new ways of solving problems. She adds that “it’s a combination of devices, content, 5G access networks, edge computing and high-performance distributed 5G core capabilities that make these innovations possible.” It’s no exaggeration to say that everyone relies on edge computing in one way or another, if not already, then at least in the near future. It has already grown into an absolute necessity for many key industries, including autonomous vehicles. For example, edge computing is utilized for industrial manufacturing purposes, within smart cities, in AI, and even in self-driving cars. The reason behind its use and importance boils down to its ability to assist IoT devices in low-bandwidth environments, ensuring that data is processed as quickly as possible. Reducing network latency is especially crucial when it comes to the computing processes behind the successful operation of self-driving cars. For example, Tesla cars are equipped with computers that process the data obtained by the vehicle’s sensors — allowing this technology to function on a split-second basis.


Why is Site Reliability Engineering Important?

“The term SRE was certainly introduced by Google, but directly or indirectly several companies have been doing work related to SRE for a long time, though I must say that Google gave it a new direction after coining the term ‘SRE.’ I have a clear view on SRE, as I believe it walks hand-in-hand with DevOps. All your infrastructure, operations, monitoring, performance, scalability and reliability factors are accounted for in a nice, lean and (preferably) automated system; however, this is not enough. Culture is an important aspect driving SRE, along with business needs. As the saying ‘to each his own’ goes, SRE is no different. It is easy to be inspired by pioneering companies, but it’s impossible to copy their culture and replicate their success, especially with your own ‘anti-patterns’ and ‘traditional’ remedial baggage. Do you have the same infrastructure and business needs as the company showcasing brilliant success with SRE? No. Can it help you? Absolutely. The key is to understand the fundamentals, recognize what matters to your own success blueprint, and find your own success factors in light of your cultural needs. Your strategy and culture need to walk together, just like your guiding (strategy) and driving (culture) factors.”


IT Career Paths You May Not Have Considered

Data analytics, DevOps, artificial intelligence and intelligent automation are just a few of the other possibilities. "You don't need to leave IT to leave IT," said Rials. "AI is a path I'd recommend for seasoned IT professionals. I think more people are on the green side and they're struggling versus a seasoned IT professional who can offer some insights." Cloud vendors are constantly innovating, so whatever skills you have now are probably very narrow compared to tomorrow's possibilities. In addition to IaaS-related roles, there are many other options including cloud-first application development (platform as a service), AI and machine learning, autonomous systems, robotics, cloud security, serverless architectures, cloud migration, and cloud engineering. Cloud is also a great launching pad for a new venture if you're so inclined. You can run, but you can't hide. Business and technology have become so interdependent that no matter how far you move away from IT, it will always find you. Of course, that's not to say you can't change your role. ... "I've seen people who said, 'I want to leave IT, I'm done,' and even though they may have become a project manager or the manager of another department, everyone knows they're still the technology expert, which is not a bad thing."


Cisco bumps up ISR/ASR router performance and capacity

The new ASR ESP-X module features the third generation of Cisco’s Quantum Flow processor, a Layer 3 forwarding ASIC. The ESP-X provides customers with more than 265 Gbps of IPv4 and IPv6 throughput, along with IPsec performance more than twice that of previous generations of the processor, according to Vitalone. Cisco ASR 1000s typically reside at the WAN edge of an enterprise data center or large office, as well as in service-provider points of presence (POPs). The routers use the ESPs to aggregate multiple traffic flows and network services, including encryption and traffic management, and forward them across WAN connections at line speeds. The ESP-X can reach more than twice the scale of previous generations for classic network address translation (NAT), carrier-grade NAT, and zone-based firewall, an important capability for edge locations that experience bandwidth demands in great bursts or waves, Vitalone said. Cisco also introduced the 1100 Terminal Services Gateway, a secure remote console for customers needing out-of-band management tools. Like the ASR devices, the 1100 runs Cisco’s IOS XE software and lets customers securely manage a variety of networking, compute, internet of things (IoT), and other devices.


How Outsourcing Practices Are Changing in 2020: An Industry Insight

Co-sourcing is an approach in which a company hires an external team that acts as an extension of its internal team, with the two parties working in collaboration. The internal and external teams work side by side to create value: together they share risks, face issues, and come up with quick solutions. Because it motivates both parties, co-sourcing helps improve the IT outcomes achieved through outsourcing, since each side has a vested interest in co-creating new value to gain a competitive edge. Even in times of disruption, the parties can go back to their contract and ensure that the work is not hampered. Their interest lies in the outcome of the collaboration, not merely in completing a task for the client or delegating a task for completion. The IT sector is producing advanced new digital products built by two organizations coming together from different parts of the globe. A company can delegate the development of its most important IT projects, enterprise architecture, or other core competencies to the external team while keeping management in-house. The focus shifts to delivering a product that generates profit for both parties.



Quote for the day:

"Leadership does not always wear the harness of compromise." -- Woodrow Wilson

Daily Tech Digest - July 01, 2020

The Future Of Work Is Not What You Think

Technology in its broadest sense is having profound impacts on society – both good and bad – as it always has. Much of the research points toward an acceleration of technology's impact and much more profound structural changes than before. The growth of platforms, ecosystems, and what I call ‘self’ technologies – those that complete actions ‘by themselves’ (AI, ML, RPA, blockchain, nano, smart, etc.) – along with advances in biotechnology is already having a strong impact on how society works, and how work works. The explosion of data and the promise of quantifying everything are already creating new challenges around privacy and what is appropriate to track and measure. Many of these technologies create fundamental contradictions that are new and will require significant creative thinking to extract true net benefits. Platform technologies supported by growing ecosystems are fully enabling sustainable ‘self careers’: a plethora of tools is widely available for individuals to deliver quality output, design their employment journeys and create their own portfolios of work.


Challenges facing data science in 2020 and four ways to address them

Despite the popularity of open-source software in the data science world, 30% of respondents said they aren't doing anything to secure their open-source pipeline. Respondents prefer open-source analytics software because they see it as innovating faster and as more suited to their needs, but Anaconda concluded that the security gaps may indicate that organizations have been slow to adapt their security practices to open-source tools. "Organizations should take a proactive approach to integrating open-source solutions into the development pipeline, ensuring that data scientists do not have to use their preferred tools outside of the policy boundary," the report recommended. ... Ethics, responsibility, and fairness are all problems that have started to spring up around machine learning and artificial intelligence, and Anaconda said enterprises "should treat ethics, explainability, and fairness as strategic risk vectors and treat them with commensurate attention and care." Despite the importance of addressing bias inherent in machine learning models and data science, doing so isn't happening: only 15% of respondents said they had implemented a bias mitigation solution, and only 19% had done so for explainability.
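To make "bias mitigation" a little more concrete, here is a minimal, hypothetical sketch of one of the simplest checks a team might start with: comparing positive-outcome rates across groups (a demographic parity gap). The column names, data, and 0.1 threshold are all invented for illustration; real fairness work involves far more than a single metric.

```python
import pandas as pd

# Toy demographic-parity check. "group" and "approved" are
# hypothetical column names; the data is invented.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

# Positive-outcome rate per protected group.
rates = df.groupby("group")["approved"].mean()
parity_gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
if parity_gap > 0.1:  # arbitrary illustrative threshold
    print("Gap exceeds threshold: investigate the model and the data.")
```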


Apple Watch, Fitbit data can spot if you are sick days before symptoms show up

The current study, which is a collaboration between Stanford Medicine, Scripps Research, and Fitbit, will use data gathered from the wearables to create algorithms that can detect the physiological changes in someone that show they're coming down with an infection, potentially before they even know they're sick. Once the signs of infection -- such as an increase in resting heart rate -- have been detected, the user will be alerted through the app that they may be getting sick, allowing them to self-isolate earlier and so spread the infection to fewer people. The lab has been investigating the potential of wearable devices to shed light on changes in users' health for some years. Researchers published a study in 2017 that showed devices could pick up changes in physical parameters before the wearer noticed any symptoms.  The algorithm from that research, known as 'change of heart', detected that changes in heart rate could signal an early infection, and the lab is now building on that research for the current pandemic. "We continued to improve the algorithm, then when the COVID-19 outbreak came, as you might imagine, we started scaling at full force," Michael Snyder, professor and chair of genetics at the Stanford School of Medicine, told ZDNet.
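The study's actual algorithm is more sophisticated than a digest can cover, but the underlying idea, flagging a resting heart rate that drifts well above the wearer's own rolling baseline, can be sketched in a few lines. The window size, threshold, and data below are illustrative assumptions, not the researchers' parameters:

```python
import numpy as np

# Toy sketch of baseline-deviation alerting (not the Stanford
# algorithm): flag days whose resting heart rate (RHR) sits well
# above a rolling per-user baseline.

def flag_elevated_rhr(daily_rhr, window=28, z_threshold=2.0):
    """Return indices of days whose RHR exceeds the rolling baseline
    mean by more than z_threshold standard deviations."""
    daily_rhr = np.asarray(daily_rhr, dtype=float)
    alerts = []
    for i in range(window, len(daily_rhr)):
        baseline = daily_rhr[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and (daily_rhr[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Simulated data: 40 normal days around 60 bpm, then elevated days.
rng = np.random.default_rng(0)
series = list(60 + rng.normal(0, 1, 40)) + [66, 67, 68]
print(flag_elevated_rhr(series))  # should flag the final elevated days
```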


9 career pitfalls every software developer should avoid

It seems easy and safe to become an expert in whatever is dominant. But then you’re competing with the whole crowd both when the technology is hot and when the ground suddenly shifts and you need an exit plan. For example, I was a Microsoft and C++ guy when Java hit. I learned Java because everyone wanted far more years of C and C++ experience than I had, and Java hadn’t existed long enough for anyone to demand the same. So I learned it, bypassed the stringent C and C++ requirements, and got in early on Java. A few years back, it looked like Ruby would be ascendant. At one point, Perl looked like it would reach the same level that Java eventually did. Predicting the future is hard, so hedging your bets is the safest way to ensure relevance. ... “I’m just a developer, I don’t interest myself in the business.” That’s career suicide. You need to know the score. Is your company doing well? What are its main business challenges? What are its most important projects? How does technology or software help achieve them? How does your company fit into its overall industry? If you don’t know the answers to those questions, you’re going to work on irrelevant projects for irrelevant people in irrelevant companies for a relatively irrelevant amount of money.


Top 13 Challenges Faced In Agile Testing By Every Tester

Speaking strictly in business terms, time is money. If you fail to accommodate automation in your testing process, the time needed to run tests stays high, and this can be a major source of challenges in Agile Testing because you’d be spending so long running those tests. You also have to fix glitches after the release, which takes up even more time. Automation for browser testing is done with the help of the Selenium framework; in case you’re wondering what Selenium is, refer to the article linked (a minimal sketch also follows below). ... Most teams emphasize maximizing their velocity with each sprint. For instance, if a team did 60 story points last sprint, this time they’ll try to do at least 65. But what if the team could only complete 20 story points by the end of the sprint? Did you realize what just happened? Instead of making sure work flowed seamlessly across the scrum board from left to right, all the team members were concentrating on keeping themselves busy. Committing to too much during sprint planning can itself cause challenges in Agile Testing: with this approach, team members are rarely prepared when something unexpected occurs.
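For readers who haven't used Selenium, here is a minimal, hypothetical sketch in Python. The URL and element IDs are invented for illustration, and it assumes the selenium package plus a matching chromedriver are installed; a real suite would live inside a test framework such as pytest:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Minimal Selenium sketch of an automated browser check.
driver = webdriver.Chrome()  # assumes chromedriver is on PATH
try:
    driver.get("https://example.com/login")  # hypothetical page
    driver.find_element(By.ID, "username").send_keys("demo")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    # Assert something observable instead of eyeballing the browser.
    assert "Dashboard" in driver.title, "login did not reach dashboard"
finally:
    driver.quit()  # always release the browser, even on failure
```

Checks like this can run on every commit, which is exactly how automation claws back the manual-testing time the paragraph above describes.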


Brute-Force Attacks Targeting RDP on the Rise

Since the start of the COVID-19 pandemic, the number of brute-force attacks targeting remote desktop protocol connections used with Windows devices has steadily increased, spiking to 100,000 incidents per day in April and May, according to an analysis by security firm ESET. By waging brute-force attacks against RDP connections, attackers can gain access to an IT network, enabling them to install backdoors, launch ransomware attacks and plant cryptominers, according to ESET's analysis. RDP is a proprietary Microsoft communications protocol that allows system administrators and employees to connect to corporate networks from remote computers. With the COVID-19 pandemic forcing employees all over the world to work at home, many organizations have increased their use of RDP but have overlooked security concerns. "Despite the increasing importance of RDP, organizations often neglect its settings and protection. When employees use easy-to-guess passwords, and with no additional layers of authentication or protection, there is little that can stop cybercriminals from compromising an organization's systems," Ondrej Kubovič, a security analyst with ESET, notes in the report.
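ESET's report focuses on statistics rather than code, but the counting logic behind spotting such a brute-force burst is simple enough to sketch. The snippet below assumes a hypothetical CSV export of failed-logon events (Windows Event ID 4625) with timestamp and source_ip columns; the 20-failures-in-10-minutes threshold is purely illustrative:

```python
import csv
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 20  # failed logons per source IP per window (illustrative)

def find_bruteforce(path):
    """Flag source IPs with a burst of failed logons in a short window."""
    attempts = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # hypothetical export format
            ts = datetime.fromisoformat(row["timestamp"])
            attempts[row["source_ip"]].append(ts)

    suspects = []
    for ip, times in attempts.items():
        times.sort()
        start = 0
        for end, t in enumerate(times):
            # Shrink the window until it spans at most WINDOW.
            while t - times[start] > WINDOW:
                start += 1
            if end - start + 1 >= THRESHOLD:
                suspects.append(ip)
                break
    return suspects

print(find_bruteforce("failed_logons.csv"))
```

Detection is only half the story, of course; the standard mitigations are account-lockout policies, multi-factor authentication, and not exposing RDP directly to the internet.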


3 CIOs talk driving IT strategy during COVID-19 pandemic

As a result of the COVID-19 pandemic, many IT leaders faced the challenge of transitioning their organizations to a work-from-home environment. In less than 10 days, the technology and operations team at Travelers Insurance managed to take the company, which has offices all over the world, fully online, with almost 100% of employees working remotely, said Lefebvre. In addition to having access to digital capabilities, the IT department's ability to respond quickly and effectively was in large part a result of building and engineering a culture with deep expertise, according to Lefebvre. Tariq echoed this during the online panel, describing the importance of a culture that allows for a more innovative mindset as part of the IT strategy. "Like any organization, we focus on results, [but] we also equally focus on creating a healthy and inclusive culture -- a culture where every team member feels that they have a voice, they are heard, where they can be themselves and be their best," he said. "[It's] a culture that is focused on continuous improvement and value generation. When you do that, magic happens."


Smart cities will track our every move. We will need to keep them in check

For James, it is key to make sure that citizens trust the organizations that hold data about them. "We need to know how this data is governed, who owns it, and who has access to the platform that does it," he says. "Otherwise, there is a risk that you won't bring citizens along with you." James points to the smart city initiative led by an Alphabet-owned urban design business in Toronto. The project was recently axed due to the economic uncertainty caused by the pandemic, but it had already run into a series of problems because of backlash from privacy-concerned leaders who were worried about surveillance. Ensuring public trust, therefore, is critical, especially because the cost of abandoning smart city technology, in the context of COVID-19, will be far greater than in normal times. "You have to think about what happens, in the long term, if you don't implement these processes," says James. Smart sensors and IoT devices won't only be used by city planners to monitor the immediate impact of measures linked to the pandemic. In the next few months, they will also be key to the recovery of local businesses, as policy makers start identifying where residents work, shop, eat out or go for drinks.


For data scientists, drudgery is still job #1

Despite all the advances in recent years in data science work environments, data drudgery remains a major part of the data scientist’s workday. According to self-reported estimates by the respondents, data loading and cleaning took up 19% and 26% of their time, respectively—almost half of the total. Model selection, training/scoring, and deployment took up about 34% total (around 11% for each of those tasks individually). When it came to moving data science work into production, the biggest overall obstacle—for data scientists, developers, and sysadmins alike—was meeting IT security standards for their organization. At least some of that is in line with the difficulty of deploying any new app at scale, but the lifecycles for machine learning and data science apps pose their own challenges, like keeping multiple open source application stacks patched against vulnerabilities. Another issue cited by the respondents was the gap between skills taught in institutions and the skills needed in enterprise settings. Most universities offer classes in statistics, machine learning theory, and Python programming, and most students load up on such courses.
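As a flavor of that load-and-clean drudgery, here is a minimal pandas sketch of the routine work that, per the survey, consumes nearly half of the day. The file name and column names are hypothetical:

```python
import pandas as pd

# Typical load-and-clean steps; "orders.csv" and its columns are
# invented for illustration.
df = pd.read_csv("orders.csv")

df = df.drop_duplicates()  # remove exact duplicate rows
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df["region"] = df["region"].str.strip().str.title()  # normalise labels

# Drop rows where the essentials could not be parsed.
df = df.dropna(subset=["order_date", "amount"])

print(df.dtypes)
print(f"{len(df)} clean rows ready for modelling")
```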


Deep learning's role in the evolution of machine learning

"There are many problems that we didn't think of as prediction problems that people have reformulated as prediction problems -- language, vision, etc. -- and many of the gains in those tasks have been possible because of this reformulation," said Nicholas Mattei, assistant professor of computer science at Tulane University and vice chair of the Association for Computing Machinery's special interest group on AI. In language processing, for example, a lot of the focus has moved toward predicting what comes next in the text. In computer vision as well, many problems have been reformulated so that, instead of trying to understand geometry, the algorithms are predicting labels of different parts of an image. The power of big data and deep learning is changing how models are built. Human analysis and insights are being replaced by raw compute power. "Now, it seems that a lot of the time we have substituted big databases, lots of GPUs, and lots and lots of machine time to replace the deep problem introspection needed to craft features for more classic machine learning methods, such as SVM [support vector machine] and Bayes," Mattei said, referring to the Bayesian networks used for modeling the probabilities between observations and outcomes.



Quote for the day:

"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell