Daily Tech Digest - June 07, 2020

Data is Risky Business: The Data Crisis Unmasked

Allegations have emerged of different jurisdictions (such as the State of Georgia in the US) adjusting graphs to create a visual of a downward trend by putting time series data out of sequence. Florida removed from her role the data scientist who was running its COVID-19 data reporting, despite that reporting having been praised for its transparency. In addition, we have the ethical issues of data-driven responses to managing the pandemic, from contact tracing applications to thermal scanning. Deploying technologies such as these requires a balancing of privacy and public interest, but it also requires objective rigour in assessing whether the technology will actually work for the purpose for which it is proposed. For example, thermal cameras sound like a great idea. Anyone registering a temperature over 38 degrees Celsius can be denied entry to the building to keep everyone safe. Only, what do you do about false positives and negatives with those technologies? Are there other things that might cause someone to run a high skin temperature (for that is what they measure) from time to time? Any hormonal conditions? Do any of your staff cycle or run to the office? Is there anything that could be done by a malicious actor (or an overly diligent staff member) to fake out the scanner by suppressing their temperature, like taking paracetamol?


Fighting Defect Clusters in Software Testing

Using metrics like defect density charts or module-wise defect counts, we can examine the history of defects that have been found and look for areas, modules or features with higher defect density. This is where we should begin our search for defect clusters. Spending more time testing these areas may lead us to more defects or more complex use cases to try out. ... Defect clustering follows the Pareto rule that 80% of the defects are caused by 20% of the modules in the software. It’s imperative for a tester to know which 20% of modules have the most defects so that the maximum amount of effort can be spent there. That way, even if you don’t have a lot of time to test, hopefully you can still find the majority of defects. Once you know the defect cluster areas, testers can focus on containing the defects in their product in a number of ways. By knowing which features or modules contain the most defects, testers can spend more effort finding better ways to test them. They can include more unit tests and integration tests for those modules. Testers can also write more in-depth test scenarios with use cases from the customers on how the feature is best used in production. Focusing on test data and creating more exhaustive combinatorial tests for variables can also lead to finding more computational or algorithmic defects sooner.
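
As a quick illustration of the Pareto idea above, here is a hedged Python sketch (module names and defect counts are made up) that finds the smallest set of modules accounting for 80% of logged defects:

```python
from collections import Counter

# Hypothetical defect log: one entry per defect, tagged with the module
# in which it was found.
defects = ["auth", "auth", "billing", "auth", "ui", "billing",
           "auth", "reports", "billing", "auth"]

counts = Counter(defects)
total = sum(counts.values())

# Walk modules from most to least defect-prone until 80% of defects are covered.
cluster, covered = [], 0
for module, n in counts.most_common():
    cluster.append(module)
    covered += n
    if covered / total >= 0.8:
        break

print(cluster)  # the defect-cluster modules that deserve the most testing effort
```

In this toy log, two of four modules account for 80% of the defects, so testing effort would concentrate there first.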


How to Design For Panic Resilience in Rust

It is always better to exit with an error code than to panic. In the best situation, no software you write will ever panic. A panic is a controlled crash, and it must be avoided to build reliable software. A crash is never “appropriate” behavior, but it’s better than allowing your system to cause physical damage. If at any time it may be believed that the software could cause something deadly, expensive, or destructive to happen, it’s probably best to shut it down. If you consider your software a car driving at about 60 miles per hour, a panic is like hitting a brick wall. A panic unwinds the call stack, hopping out of every function call and returning from program execution, destroying objects as it goes. It is considered neither a safe nor a clean shutdown. Avoid panics. The best way to end a program’s execution is to allow it to run until the last closing brace. Somehow, some way, program for that behavior. It allows all objects to safely destroy themselves. See the Drop trait. ... Creating custom error types is valuable. When you use a bare enum as an error type, the data footprint can be tiny.
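
A minimal Rust sketch of that advice, under assumed names (ConfigError, load_config, and app.toml are invented for illustration): a bare-enum error type, and a main that returns an exit code instead of panicking, so the stack unwinds normally and every object's Drop runs.

```rust
use std::fmt;
use std::process::ExitCode;

// A bare enum keeps the error type's data footprint tiny.
#[allow(dead_code)]
#[derive(Debug)]
enum ConfigError {
    Missing,
    Malformed,
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Missing => write!(f, "config file not found"),
            ConfigError::Malformed => write!(f, "config file could not be parsed"),
        }
    }
}

// Return a Result instead of calling .unwrap() or panic!().
fn load_config(path: &str) -> Result<String, ConfigError> {
    std::fs::read_to_string(path).map_err(|_| ConfigError::Missing)
}

fn main() -> ExitCode {
    match load_config("app.toml") {
        Ok(_config) => ExitCode::SUCCESS,
        Err(e) => {
            // Report the failure and exit with an error code; no brick wall.
            eprintln!("error: {e}");
            ExitCode::FAILURE
        }
    }
}
```

The program still ends at its last closing brace on both paths; the failure case simply carries a non-zero exit code to the caller.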


Good Business Processes Are Key to Resilience During Disruption

With current and future operational challenges bogging down company leadership, empowering all employees with better process management skills and resources ensures your whole organization, from frontline employees to the C-suite, has better visibility and control over responsibilities. Process mapping and oversight need to be a priority for all businesses today, and preparing teams with tools and resources must come from the top down. Companies must act with a renewed sense of urgency to develop dynamic processes and create stable growth environments during uncertain times. To begin sharing operational knowledge and ownership across your organization, leaders must first identify, define and map key processes across their organization. Companies are no longer in a position to avoid process understanding. Work-related implications of COVID-19 emphasize our need for a renewed focus on process, as inefficient communications and workflows can make or break delivery of your product or service. The teams that will be most successful in the next year are prepared to execute on innovative ideas and adapt willingly.


Shift Your Cybersecurity Mindset to Maintain Cyber Resilience

As more companies expand their remote workforce, the number of endpoints with access to corporate resources is proliferating. Hackers are seizing the opportunities this presents: Phishing email click rates have risen from around 5 percent to over 40 percent in recent months, according to Forbes. With a strong cybersecurity mindset and some strategic planning, your company can position itself to survive these new working conditions and build up even more cyber resilience as you adapt. Because cybersecurity professionals are facing formidable adversaries, understanding how hackers think can go a long way in mitigating the threat they pose. Security expert Frank Abagnale is one of the foremost experts on the thought processes of threat actors, and he was kind enough to lend his expertise to this piece. Since the number of successful phishing attacks has skyrocketed, I asked him if this is more a function of hackers stepping up their game, or employees not possessing the right cybersecurity mindset to pay attention. “It’s both,” he explained. “Any crisis is a perfect backdrop to phishing attacks.”


Software Testing Survival Tips

Software test automation is the practice of automating test procedures of the application under test (AUT). There are two main testing aspects: functional and performance. Functional testing validates errors triggered by the AUT's functionality -- primarily front-end (UI and API). As an example, clicking the login button in the AUT should navigate to the welcome page. Performance testing involves validating the overall performance and scalability of the system under load (SUL) by measuring end-to-end transaction time for data passed from the SUL front end to the back end, and the sustainability of the SUL per number of users. Going back to our original example, a login transaction can take one second for 1,000 users. ... Many companies have been doing TA for the past few years, but with fewer TA engineers, they can't catch up with the regression testing schedule. If you are in this same boat, you need to look into your TA architecture. The key to high-ROI TA is the minimization of artifacts: the number of TA scripts, the size of TA scripts (lines of code, or steps), cross-platform portability, and the shareability of TA foundation components such as data sources, functional libraries and test objects.
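
To make the two aspects concrete, here is a hedged Python sketch (the login function is a made-up stand-in for the AUT, not a real application) pairing a functional assertion with a simple end-to-end timing check:

```python
import time

# Hypothetical stand-in for the application under test (AUT): a login call
# that returns the page the user lands on.
def login(username, password):
    return "welcome" if (username, password) == ("alice", "secret") else "error"

# Functional check: valid credentials should navigate to the welcome page.
assert login("alice", "secret") == "welcome"
assert login("alice", "wrong") == "error"

# Performance check: measure end-to-end transaction time against a budget
# (here one second, echoing the 1,000-user login example above).
start = time.perf_counter()
result = login("alice", "secret")
elapsed = time.perf_counter() - start
assert result == "welcome" and elapsed < 1.0
print(f"login took {elapsed:.6f}s")
```

In a real suite the stand-in would be replaced by UI or API calls against the AUT, and the timing check would run under concurrent load rather than a single call.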



Suspected Hacker Faces Money Laundering, Conspiracy Charges

The conspiracy charge carries a possible five-year federal prison sentence and a $250,000 fine, and the money laundering charge carries a possible 20-year sentence and a $500,000 fine, according to the Justice Department. Over the course of several years, the FBI alleges, Antonenko and two unnamed co-conspirators targeted vulnerable computer networks in order to steal credit and debit card numbers, expiration dates and other information, according to the federal indictment unsealed this week. In addition, the three allegedly stole personally identifiable information from victims, federal prosecutors say. After selling this data, Antonenko and the other two co-conspirators allegedly used bitcoin, as well as banks, to launder money and hide the proceeds, according to the indictment. Edward V. Sapone, a New York-based attorney representing Antonenko, told Information Security Media Group: "While a colossal amount of information has been released, size doesn't matter. Oftentimes, big criminal cases turn on one fact. Here, the facts have not been tested, as Mr. Antonenko hasn't even been arraigned on an indictment."



Understanding cyber threats to APIs

API access is often loosely controlled, which can lead to undesired exposure. Ensuring that the correct set of users has appropriate access permissions for each API is a critical security requirement that must be coordinated with enterprise identity and access management (IAM) systems. In some environments, as much as 90% of the respective application traffic (e.g., account login or registration, shopping cart checkout) is generated by automated bots. Understanding and managing traffic profiles, including differentiating good bots from bad ones, is necessary to prevent automated attacks without blocking legitimate traffic. Effective complementary measures include implementing whitelist, blacklist, and rate-limiting policies, as well as geo-fencing specific to use cases and corresponding API endpoints. APIs simplify attack processes by eliminating the web form or the mobile app, thus allowing a bad actor to more easily exploit a targeted vulnerability. Protecting API endpoints from business logic abuse and other vulnerability exploits is thus a key API security mitigation requirement. Preventing data loss over exposed APIs, whether for appropriately privileged users or otherwise, and whether due to programming errors or security control gaps, is also a critical security requirement.
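
Rate limiting, one of the complementary measures named above, is commonly built on a token bucket. A hedged Python sketch (the rate and capacity values are illustrative, and a real deployment would keep one bucket per client or API key):

```python
import time

class TokenBucket:
    """Per-client rate limiter: allow `rate` requests per second on average,
    with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # 5 req/s on average, burst of 10
results = [bucket.allow() for _ in range(12)]
print(results)  # the initial burst is served; the excess is throttled
```

The same gate can sit in front of an API endpoint, returning HTTP 429 when allow() is False, and be combined with allowlist/blocklist checks before the bucket is consulted.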


A CEO's View Of Data Quality And Governance

To thrive, you must be intentional with your data strategy, by both knowing and trusting the data upon which your organization relies. Knowing your data implies a governance program. In specific terms, it involves harvesting and managing metadata, or "data about the data." Metadata management is a key tenet of a data governance program. Without it, you can only rely on tribal knowledge within your organization. Yet ever-growing volumes of data have rendered that not only impractical, but nearly impossible. Trusting your data implies validating and monitoring its quality and state at the point in time and place where you apply it in your business processes. Most likely your competitors are racing to achieve data management superiority ahead of you. Doing so is vital to not only mitigate regulatory compliance risk, but also to engage customers and stakeholders, optimize the outcome of key business initiatives, such as shortening the time required for new product launch, and apply analytics to inform daily decisions and long-term strategy. Making sure that your data strategy supports your business is a new responsibility for senior leadership.


Predictions for Data Science and AI

When it comes to data science and AI, we’re expecting continued growth of specialization in these roles. Divided into two categories, ‘engineering-heavy’ and ‘science-heavy’ data science roles focus on different aspects of the data science space. ‘Engineering-heavy’ roles include machine learning, AI, and data engineers, focusing on the infrastructure, platforms, and production systems. ‘Science-heavy’ data science roles include analytics consultants, data scientists, and business analytics specialists, focusing on decision and inquiry work. Demand for these roles is increasing rapidly as AI models move to production on a daily basis. This trend toward specialization will continue to grow over the next few years. Being multi-talented definitely garners a lot of attention; however, recruiting those who excel and specialize in one field will help you greatly in the long term (when you’re investing in your startup).
More Tools, More Confusion

In this digital age, you can easily find a plethora of tools available to complete any technical task. Although there are several ways of approaching a particular problem, this abundance of options causes great confusion amongst people new to the industry.



Quote for the day:

"Success seems to be connected to action. Successful people keep moving. They make mistakes, but they don't quit." -- Conrad Hilton

Daily Tech Digest - June 06, 2020

What is pretexting? Definition, examples and prevention

Pretexting is, by and large, illegal in the United States. For financial institutions covered by the Gramm-Leach-Bliley Act of 1999 (GLBA) — which is to say just about all financial institutions — it's illegal for any person to obtain or attempt to obtain, to attempt to disclose or cause to disclose, customer information of a financial institution by false pretenses or deception. GLBA-regulated institutions are also required to put standards in place to educate their own staff to recognize pretexting attempts. One thing the HP scandal revealed, however, was that it wasn't clear if it was illegal to use pretexting to gain non-financial information — remember, HP was going after their directors' phone records, not their money. Prosecutors had to pick and choose among laws to file charges under, some of which weren't tailored with this kind of scenario in mind. In the wake of the scandal, Congress quickly passed the Telephone Records and Privacy Protection Act of 2006, which extended protection to records held by telecom companies. One of the best ways to prevent pretexting is to simply be aware that it's a possibility, and that techniques like email or phone spoofing can make it unclear who's reaching out to contact you.


MIT professor wants to shift power to the people by building local data collectives

The idea is to use distributed systems to give individuals and cities control over their own data. Right now, health insurance companies and hospitals have primary control of an individual's health data, and banks get the most benefit from analyzing customer data. Individuals have access to the information but there's no easy way to put it to good use. If smaller, local organizations--like credit unions--could create a secure platform for people to manage their own data, this would shift decision-making and control to people and communities instead of national corporations. Increasing local control of data would allow leaders and people to figure out solutions that fit the needs of their communities, instead of using a one-size-fits-all approach. Pentland used the example of the Upper Peninsula of Michigan and Boston. He grew up in a rural community but now lives in an international, urban, tech-centric city. "The rules here are totally different, and what works for the Upper Peninsula does not work here," he said. "The idea is to handle local conditions locally and coordinate globally so cities can learn from each other but be responsible for themselves."


5 ways to drive agile experimentation using feature flags

Branching controls code deployment and can regulate whether a feature gets deployed. But this is only a gross, binary control that can turn on and off the feature’s availability. Using only branching to control feature deployments limits a team’s ability to control when code gets deployed compared to when product leaders enable it for end-users. There are times product owners and development teams should deploy features and control access to them at runtime. For example, it’s useful to experiment and test features with specific customer segments or with a fraction of the user base. Feature flagging is a capability and set of tools that enable developers to wrap features with control flags. Once developers deploy the feature’s code, the flags enable them to toggle, test, and gradually roll out the feature with tools to control whether and how it appears to end-users. Feature flagging enables progressive delivery by turning on a feature slowly and in a controlled way. It also drives experimentation. Features can be tested with end-users to validate impact and experience. Jon Noronha, VP Product at Optimizely, says, “Development teams must move fast without breaking things.”
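
One common way to implement the gradual rollout described above is hash-based percentage bucketing. A hedged Python sketch (the flag store, flag name, and percentages are invented for illustration; real flag services add targeting rules and remote configuration):

```python
import hashlib

# Hypothetical flag store: each flag maps to a rollout percentage (0-100).
# In production this would live in a remote config service, editable at runtime.
FLAGS = {"new-checkout": 25}

def is_enabled(flag, user_id):
    """Deterministically bucket each user into 0-99 and compare against the
    flag's rollout percentage, so a user's experience stays stable across requests."""
    rollout = FLAGS.get(flag, 0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout

# Gradually widen the rollout without redeploying code:
enabled_users = [u for u in range(1000) if is_enabled("new-checkout", u)]
print(len(enabled_users))  # roughly a quarter of users see the new feature
```

Raising the percentage in the flag store widens the rollout progressively; hashing on both the flag name and the user ID keeps cohorts independent across experiments.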


Reinforcement Learning: The Next Big Thing For AI

“Reinforcement learning entails an agent, action and reward,” said Ankur Taly, who is the head of data science at Fiddler. “The agent, such as a robot or character, interacts with its surrounding environment and observes a specific activity, responding accordingly to produce a beneficial or desired result. Reinforcement learning adheres to a specific methodology and determines the best means to obtain the best result. It’s very similar to the structure of how we play a video game, in which the agent engages in a series of trials to obtain the highest score or reward. Over several iterations, it learns to maximize its cumulative reward.” In fact, some of the most interesting use cases for reinforcement learning have been with complex games. Consider the case of DeepMind’s AlphaGo. The system used reinforcement learning to quickly understand how to play Go and was able to beat the world champion, Lee Sedol, in 2016. (The game has more potential moves than the number of atoms in the universe!) But there have certainly been other applications of the technology that go beyond gaming. To this end, reinforcement learning has been particularly useful with robotics.
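
The agent/action/reward loop Taly describes can be sketched with tabular Q-learning in a toy environment (the corridor world and all hyperparameters below are invented for illustration, far simpler than anything behind AlphaGo):

```python
import random

random.seed(0)

# A tiny environment: states 0..4 in a corridor; reaching state 4 yields reward 1.
# Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):  # episodes of trial and error
    s = random.randrange(GOAL)  # exploring starts: begin in a random non-goal state
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy in every state should be "move right" toward the goal.
policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(GOAL)]
print(policy)
```

Over many iterations the agent's cumulative reward rises as the Q-table propagates value backward from the goal, which is the "learns to maximize its cumulative reward" behavior in miniature.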


Driving Better Analytics Through Robust Data Strategies

The idea behind developing a data strategy is to make sure all data resources are positioned in such a way that they can be used, shared, and moved easily and efficiently. Data is no longer a byproduct of business processing – it’s a critical asset that enables processing and decision making. A data strategy helps by ensuring that data is managed and used as an asset. It provides a common set of goals and objectives across projects to ensure data is used both effectively and efficiently. A data strategy establishes common methods, practices, and processes to manage, manipulate, and share data across the enterprise in a repeatable manner. While most companies have multiple data management initiatives underway (metadata, master data management, data governance, data migration, modernization, data integration, data quality, etc.), most efforts are focused on point solutions that address specific project or organizational needs. A data strategy establishes a road map for aligning these activities across each data management discipline in such a way that they complement and build on one another to deliver greater benefits.


Understandability: The Most Important Metric You’re Not Tracking

Most commercial software engineering tasks out there do not start out with a clean slate. There is an existing application, written using a certain computer language(s), relying on a set of frameworks and libraries, and running on top of some operating system(s). We take it upon ourselves (or our teams) to change that existing application so that it meets some requirement(s), such as developing a new feature, fixing an existing bug, etc. Simultaneously we are required to continue meeting all the existing (un)documented requirements, and maintain the existing behavior as much as possible. And, as every junior software engineer finds out on their first day on the job, writing a piece of code to solve a simple computer science problem (or copying the answer from StackOverflow) is nowhere near the level of complexity of solving that same problem within a large and intricate system. Borrowing from the financial industry, let’s define Understandability: “Understandability is the concept that a system should be presented so that an engineer can easily comprehend it.” The more understandable a system is, the easier it will be for engineers to change it in a predictable and safe manner.


COVID-19 puts WLAN engineers in the spotlight

For Dickey and other WLAN professionals, the pandemic has demonstrated the critical importance of wireless communications. Nearly two-thirds of American workers – double the number from early March – are doing their jobs via home wireless, according to a Gallup Poll survey. Cisco, in its latest earnings report, announced that 95% of its employees are working from home. That means WLAN pros have had to shift their attention from maintaining corporate networks to remotely assisting workers, many of whom are non-technical, in getting their home networks up to speed and securely connected to corporate assets. Tam Dell'Oro, founder and CEO of the Dell'Oro Group, surveyed about 20 enterprise network managers and WLAN distributors, and reports that new in-building deployments have pretty much stopped cold. She adds that with WLAN pros charged with setting up and securing at-home workers, "remote access devices, particularly those with higher WAN connectivity and higher security, are flying off the shelf." IDC analyst Brandon Butler says the 2020 forecast for the WLAN industry has been downgraded from the 5.1% growth rate predicted prior to the pandemic to a 2.3% decline.


Explore API documentation basics and best practices

Good API documentation does not happen by accident. It takes clear guidelines, a consistent team effort, stringent peer review and a commitment to maintain documentation throughout an API's lifecycle. Some top API documentation best practices you should implement include: Include all necessary components. A complete documentation package usually has sections on authentication, error messages, resource usage and terms of acceptable use policies, and a comprehensive change log. Some API documentation also includes a series of guides that provides detailed examples of API use and use cases. Know the intended audience. Tailor API documentation for the intended audience. If the documentation is intended for novice developers, focus on things like tutorials, samples and guides. If the documentation is intended for experienced developers, build up reference material detailing syntax, parameters, arguments and response details. Consider how to include vested non-developers, such as project managers or even CTOs, in the API documentation.


Equality of Actors in Enterprise Architecture

With ‘Information Technology’ we normally designate our modern digital equipment. However, for millennia humanity has used information technologies to record and transmit information. To underline the significance of information technology, the difference between prehistory and history lies in the use of information technology — the ‘history era’ is synonymous with the ‘information age’. Floridi argues that recently we have entered the era of hyperhistory with the invention of the computer. The difference between hyperhistory and history is that in history ITs only record and transmit information, while in hyperhistory computers have the capability to process information. As a basic function, computers are able to store information, and this already makes a big difference compared with the labor-intensive recording of data that persisted until the sixties or seventies. Moreover, the computer can process this stored information and make computations that beforehand were the prerogative of humans. As a side remark, the term ‘computer’ until the nineteenth century was synonymous with a person who ‘performs calculations’, not a machine.


Data Scientist vs Data Analyst. Here’s the Difference.

While the data science and machine learning fields suffer from confusion between their job descriptions among employers and the general public, the difference between data science and data analytics is more separable. However, there are still similarities along with the key differences between the two fields and job positions. Some would say a data analytics role is a prerequisite to being hired as a data scientist. This article aims to shed light on what it means to be a data scientist and a data analyst, from a professional who has worked in both fields. While I was studying to become a data scientist, as a working data analyst, I realized that data science theory is vastly different from that of data analytics. That is not to say that data science does not share the same tools and programming languages as data analytics. One could also argue that data science is a form of data analytics because ultimately, you are working with data — transforming, visualizing, and coming to a conclusion for actionable change. So if they are so similar, or one is under the other, why write an article on these two popular fields? The reason is that people who are coming into either field can learn from here.



Quote for the day:

“It’s only after you’ve stepped outside your comfort zone that you begin to change, grow, and transform.” -- Roy T. Bennett

Daily Tech Digest - June 05, 2020

New report underscores the high costs and challenges of virtual workforces

DaaS was introduced as a cloud-based reworking of legacy tech meant to reshape the virtual desktop landscape, yet it appears to be marred by the same underlying hidden costs and similar management challenges. "DaaS is essentially just VDI that has been crudely shoved into the cloud. DaaS sought to 'disrupt' VDI by moving certain aspects of virtual desktop delivery to the cloud, but it still relies on the same costly and complex underlying infrastructure," Henshaw said. Although these offerings are intended to streamline operations and cut costs, some organizations quickly find the opposite to be true. These unforeseen challenges and unanticipated costs are often burdensome for organizations. More than half of current DaaS and VDI users surveyed note that the adoption of these systems required a minimum of 10 full-time employees dedicated to their management. ... VDI systems require purchasing lots of hardware upfront, and the costs continue to mount over the years due to maintenance and upkeep. Comparatively, virtual application delivery platforms allow companies to save up to 75% of costs related to upfront infrastructure and recurrent licensing fees, per the report.


Fix spaghetti code and other pasta-theory antipatterns

Carelessness isn't the only way to end up with hard-to-maintain code; over-complication can be a cause too. In 2013, I took a job at a technology company that had one of the most confusing codebases I have ever seen. Every element of the product was abstracted into dozens of singular, nested components. It was nearly impossible to make a change in one layer of the stack without affecting every other layer. The codebase wasn't a mess, but it wasn't maintainable either. If spaghetti code suffers from architectural sloppiness, lasagna code is a characteristic of over-engineering at its most extreme. Programmers who work with object-oriented codebases often fall into this trap. Lasagna code is abstracted, tightly connected layers; developers write it because they are convinced every subcomponent necessitates its own object. These programmers tend to focus on the layout of code at the expense of its maintainability. What starts out as a highly organized codebase quickly becomes an over-architected disaster. Here's a good rule to follow: Be conservative with your abstractions.


What’s the difference between supervised and unsupervised?

Supervised machine learning applies to situations where you know the outcome of your input data. Say you want to create an image classification machine learning algorithm that can detect images of cats, dogs, and horses. To train the AI model, you must gather a large dataset of cat, dog, and horse photos. But before feeding them to the machine learning algorithm, you must annotate them with the name of their respective classes. Annotation might include putting the images of each class in a separate folder, using a file-naming convention, or appending meta-data to the image file. This is the laborious manual task that is often referred to in stories that mention AI sweatshops. Once the data is labeled, the machine learning algorithm (e.g. a convolutional neural network or a support vector machine) processes the examples and develops a mathematical model that can map each image to its correct class. If the AI model is trained on enough labeled examples, it will be able to accurately detect the class of new images that contain cats, dogs, or horses. Supervised machine learning solves two types of problems: classification and regression.
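
The label-then-train pipeline can be shown end to end with a deliberately tiny stand-in model. A nearest-centroid classifier is this sketch's own assumption, not the article's convolutional network or support vector machine, but the supervised flow is the same: annotate examples with classes, fit a model, classify unseen inputs.

```python
# Each example is a 2-D feature vector (imagine compressed image features),
# annotated with its class label -- the "labeled dataset" step.
training_data = {
    "cat":   [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "dog":   [(4.0, 4.2), (3.8, 4.1), (4.2, 3.9)],
    "horse": [(8.0, 1.0), (7.8, 1.2), (8.2, 0.9)],
}

# "Training": compute one centroid per class from the labeled examples.
centroids = {
    label: tuple(sum(v[i] for v in vecs) / len(vecs) for i in range(2))
    for label, vecs in training_data.items()
}

def predict(x):
    """Classify a new feature vector by its nearest class centroid
    (squared Euclidean distance)."""
    return min(centroids, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

print(predict((0.9, 1.1)))  # -> cat
print(predict((4.1, 4.0)))  # -> dog
```

Swapping the centroid model for a neural network changes the "develop a mathematical model" step, but the labeled-data requirement around it stays identical.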


The digital transformation’s next frontier: Open banking

Open banking, as a standard, is a regulatory framework that guides how financial institutions create, share and access consumer financial data. That data is shared freely through a network of financial institutions and financial technology companies. But there are two important traits in open banking that distinguish it from traditional approaches: consumers consent to their banks sharing their data with third parties, and all financial institutions open their programs and interfaces to third-party developers. These attributes aim to help consumers better understand their financial data so they can make better decisions. Banks, in turn, can provide a more seamless consumer experience through the use of mobile applications. Technology makes this all possible. For consumers, the infrastructure increasingly is there. Smart mobile devices have become smarter and more powerful, and they have continued to shape consumer behavior. More and more, this means that consumers are demanding digital conveniences like paperless use of money and virtual banking.


Clustering Non-Numeric Data Using C#

Clustering data is the process of grouping items so that items in a group (cluster) are similar and items in different groups are dissimilar. After data has been clustered, the results can be visually analyzed by a human to see if any useful patterns emerge. For example, clustered sales data could reveal that certain types of items are often purchased together, and that information could be useful for targeted advertising. Clustered data can also be programmatically analyzed to find anomalous items. For completely numeric data, the k-means clustering algorithm is simple and effective, especially if the k-means++ initialization technique is used. But non-numeric data (also called categorical data) is surprisingly difficult to cluster. ... In order to use the code presented here with your own data you must have intermediate or better programming skill with a C-family language. In order to significantly modify the demo algorithm you must have expert level programming skill. This article doesn't assume you know anything about clustering or category utility. The demo is coded in C# but you shouldn't have too much trouble refactoring the code to another language if you wish.
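
The article's demo is in C# and uses category utility; as a rough illustration only, here is a much simpler Python sketch that clusters categorical tuples k-modes style, with attribute-mismatch counts as the distance (the data, attribute names, and k=2 are all made up):

```python
from collections import Counter

# Distance between two categorical tuples = number of attributes that differ.
def mismatches(a, b):
    return sum(x != y for x, y in zip(a, b))

# Hypothetical categorical data: (color, size, texture)
items = [
    ("red", "small", "smooth"), ("red", "small", "rough"),
    ("red", "medium", "smooth"), ("blue", "large", "rough"),
    ("blue", "large", "smooth"), ("blue", "medium", "rough"),
]

# k-modes style: seed two cluster "modes", then alternate assign/update.
modes = [items[0], items[3]]
for _ in range(5):
    clusters = [[], []]
    for it in items:
        k = min((0, 1), key=lambda j: mismatches(it, modes[j]))
        clusters[k].append(it)
    # New mode = most common value in each attribute position within the cluster
    # (keep the old mode if a cluster happens to empty out).
    modes = [
        tuple(Counter(v[i] for v in c).most_common(1)[0][0] for i in range(3)) if c else m
        for c, m in zip(clusters, modes)
    ]

for c in clusters:
    print(c)
```

This converges to a red-ish cluster and a blue-ish cluster on the toy data; the C# demo's category-utility approach is more principled but follows the same assign-then-summarize loop.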


Installing OpenCV and ImageAI for Object Detection

In this series, we’ll learn how to use Python, OpenCV (an open source computer vision library), and ImageAI (a deep learning library for vision) to train AI to detect whether workers are wearing hardhats. In the process, we’ll create an end-to-end solution you can use in real life—this isn’t just an academic exercise! This is an important use case because many companies must ensure workers have the proper safety equipment. But what we’ll learn is useful beyond just detecting hardhats. By the end of the series, you’ll be able to use AI to detect nearly any kind of object in an image or video stream. ... OpenCV is an open-source computer vision library with C++, Python, Java, and MATLAB interfaces. ImageAI is a machine learning library that simplifies AI training and object detection in images. These two libraries make it extremely easy to solve a number of object detection problems in images and videos. We’re going to dive straight into our solution by setting these libraries up using Python in a Jupyter Notebook (on Windows).


How Covid-19 has changed the role of the CTO

While the workforce is remote, its individuals are more likely to work at all hours of the day from several locations. Accordingly, CTOs must structure their team as if all company departments are now operating 24/7. This can be a drastic change for some operational leaders, who must consider whether outsourcing partial or total systems coverage to a third-party specialist provider would be a more efficient way to accommodate this demand, while preserving internal company resources. Once the pandemic has subsided, the new CTO must act as a major decision-maker at the company and co-lead the process of returning employees to the office environment, ensuring that the transition back is as seamless as possible. This may include enforcing pre-return system checks for each office location, ensuring that all devices are updated with their latest software patches, creating a specialised portal for employees to flag technical issues and designing programs to help map and/or reconcile company data stored across multiple company and personal devices. The onus is on CTOs to optimise the tools and channels through which company data is shared.


What matters most in an Agile organizational structure

Agile software developers aim to better meet customer needs. To do so, they need to prioritize, release and adapt software products more easily. Unlike the Spotify-inspired tribe structure, Agile teams should remain close to the operations teams that will ultimately support and scale their work, according to the authors. This model, they argue in Doing Agile Right, promotes accountability for change and a willingness to innovate on the business side. Any Agile initiative should follow the sequence of "test, learn, and scale." People at the top levels must accept new ideas, which will drive others to accept them as well. Then, innovation comes from the opposite direction. "Agile works best when decisions are pushed down the organization as far as possible, so long as people have appropriate guidelines and expectations about when to escalate a decision to a higher level." A successful Agile transformation might require some pilot teams and perhaps several years to scale across the business, as was the case with Bosch Power Tools, profiled as an example in Doing Agile Right.


Infosec 2020: Covid-19 an opportunity to change security thinking

“As much as this situation presents a challenge to security behaviours and culture, we need to think about how to engage people on a personal level,” she said. Barker advised that security teams try to engage their users about security at home and how they’re using digital platforms to connect with family and friends, shop, game, and use social media. “There are lots of avenues to talk to people about security, and that mindset expands to think more about the business as well,” she said. Vincent Blake, vice-president and IT security officer at publisher Pearson, agreed that Covid-19 might create an opportunity for security by enabling security teams to turn the conversation away from restrictive “thou shalt not” policy and towards gentle advice on how to protect yourself online. Mark Osborne, Europe, Middle East and Africa (EMEA) CISO at real estate firm JLL, said that security teams were getting a lot of kudos thanks to how they have stepped up and kept teams running during a critical black swan event, and said they could ride this wave of positivity to improve relationships with the business afterwards.


Running Axon Server - CQRS and Event Sourcing in Java

Typical CQRS applications have components exchanging commands and events, with persistence of the aggregates handled via explicit commands, and query models optimized for their usage and built from the events that report on the aggregate’s state. The aggregate persistence layer in this setup can be built on RDBMS storage layers or NoSQL document components, and standard JPA/JDBC based repositories are included in the framework core. The same holds for storage of the query models. The communication for exchanging the messages can be solved with most standard messaging components, but the usage patterns do favour specific implementations for the different scenarios. We can use pretty much any modern messaging solution for the publish-subscribe pattern, as long as we can ensure no messages are lost, because we want the query models to faithfully represent the aggregate’s state. For commands we need to extend the basic one-way messaging into a request-reply pattern, if only to ensure we can detect the unavailability of command handlers. Other replies might be the resulting state of the aggregate, or a detailed validation failure report if the update was vetoed.
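Axon implements these patterns in Java; as a language-neutral illustration (the names and shapes below are invented for this sketch, not Axon's API), the event side's publish-subscribe and the command side's request-reply, including the "vetoed update" reply, can be mocked up in a few lines:

```python
class EventBus:
    """Publish-subscribe: every event must reach every subscriber so the
    query models faithfully represent the aggregate's state."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

class AccountAggregate:
    """Aggregate: validates commands and emits events. Request-reply lets
    the caller detect a vetoed update (a validation failure) immediately."""
    def __init__(self, bus):
        self.balance = 0
        self.bus = bus

    def handle_deposit(self, amount):
        self.balance += amount
        self.bus.publish({"type": "MoneyDeposited", "amount": amount})
        return {"ok": True, "balance": self.balance}

    def handle_withdraw(self, amount):
        if amount > self.balance:
            # The reply carries a validation failure report: the veto case.
            return {"ok": False, "reason": "insufficient funds"}
        self.balance -= amount
        self.bus.publish({"type": "MoneyWithdrawn", "amount": amount})
        return {"ok": True, "balance": self.balance}

class BalanceQueryModel:
    """Query model: built purely from the event stream, never queried
    directly from the aggregate."""
    def __init__(self, bus):
        self.balance = 0
        bus.subscribe(self.on_event)

    def on_event(self, event):
        if event["type"] == "MoneyDeposited":
            self.balance += event["amount"]
        elif event["type"] == "MoneyWithdrawn":
            self.balance -= event["amount"]
```

Wiring a bus, a query model and an aggregate together shows the flow: a successful command produces an event that updates the view, while a vetoed command produces a failure reply and no event at all.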



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith

Daily Tech Digest - June 04, 2020

Machine learning will transform the banking sector

The unified approach that AI provides allows institutions to see the financial status of a customer across multiple accounts in an instant. This information combines with previously contracted products, transaction histories, and individual interactions to ensure that personalized services are always available. This technology redefines the definition of customization. Instead of requiring a customer to complete a series of questionnaires or surveys to match up specific datapoints to potential products of interest, machine learning automates this process by looking at all of the activities of the consumer throughout that person’s history with the organization. It can even pull information from the news or social media posts to determine the viability of an offer before it is even requested. That makes the predictive mechanisms more accurate, shortening the time it takes for someone to complete the processes needed to access something new. As improvements to data warehousing and information processing continue developing, the existing machine learning models will use specific financial profiles that the technology develops internally to create unique initiatives that encourage increases in customer interactions.


Survey: Security Concerns Slow Down IoT Deployments

Although security requirements may mean IoT projects will take longer, it also means that enterprises are cognizant of how a growing number of devices on networks introduces new attack vectors. "The risks with IoT potentially increase because of the diversity of deployments and technologies in IoT networks - general enterprise security is much more standardized and so easier to deploy and keep updated," says Alexandra Rehak, chief analyst with the London-based consultancy Omdia and head of its internet of things practice. The Omdia-Syniverse IoT Enterprise Survey polled 200 enterprises between January and March in North America and Europe that have deployed IoT devices. It was commissioned by Syniverse, which offers private networks for fleets of IoT devices. Of all enterprises polled, 86% reported that IoT projects were delayed or constrained by security. The survey covered companies in healthcare, financial services, manufacturing, retail and hospitality and transportation. Their concerns over security vary. For example, the manufacturing industry is most worried about unauthorized devices joining the network. Healthcare and finance rank regulatory and compliance concerns high.


Layoffs rock the IT services industry amid move to the cloud

So what are we to make of all this? The obvious culprit is the move to the cloud. Enterprises don’t need to hire as many consultants making six-figure salaries when AWS or Microsoft is handling half of your IT load. But DXC said it wasn’t the cloud, it was its own bloated hierarchy, which is quite an admission. “There is no doubt that these big consulting firms are having to pivot because of the cloud and it’s hitting them in the bottom line,” says Joshua Greenbaum, president of Enterprise Application Consulting, an independent consultancy. “The quarantine and emergency has accelerated plans to move to the cloud and put the brakes on projects that would have been lucrative. The combination of the two means a lot of stuff is put on hold.” Whether those jobs come back is questionable. Not helping matters is machine-learning software like the recently announced SAP Cloud ALM that automates a lot of the basic work of cloud lifecycle management. There’s no doubt that the basic, low-level work of getting data centers up and running isn’t coming back. There will be a shift. At the end of the day, complexity is king in enterprise software. What is made easy today is made more complex tomorrow, and that will need more skills.


What are you tuning for?

Are you tuning to improve your SQL Server licensing footprint? You can tune for CPU reduction in repetitive queries. You can tune indexes and statistics to make certain queries faster and more efficient, which in turn reduces the CPU, memory, and storage thrash while the commands are executing. You might even be able to use less of the things that SQL Server licensing is based on, namely CPUs. Are you tuning for end-user productivity? Can you quantify the pain points the users are ‘feeling’ each and every day? Can you pinpoint the database commands that are underneath those application features? Are they database-driven, or is it more application data handling that is slowing down the function? Maybe the volume of data that the business accesses daily is so high that all-flash storage is the biggest gain you can make. What if faster CPUs, and not just more cores, would get your users a larger bang for the buck? Are your repetitive queries optimal? Can you even access the commands to tune them, such as queries underneath third-party applications? What if tuning for end-user productivity meant increasing parallelism or adding to your licensing footprint?


Constructing the future for engineering – finding the right model where one size does not fit all

If anything goes wrong or needs adjustment, there is no “back to the drawing board” any more. In fact, not for a long while. It’s all about accessing the right type of data at the right stage in the process, meaning that all of these stages have to be completely interlinked. Indeed, their success depends on constant collaboration and communication between the various people engaged in carrying out their individual activities, who may be located virtually anywhere in the world. Many of the modern world’s most famous engineering projects could only have been realised by bringing together talent from around the globe with a multitude of different departments and workflows in one extended, virtual team. And it’s not just engineering and design these days – collaboration has to extend to marketing and sales so that marketable and sellable concepts are what is ultimately built and put on sale. Crucially, this information also has to extend in a business-relatable form to boardrooms. Photorealistic renderings of finished products are not just pretty pictures – they are pretty essential. This has been the fundamental model of engineering for the past two decades.


Digital banking is now for everyone, how will you choose to compete?

There have been attempts by traditional banks to break free from their analogue worlds and colonise new digital planets. In 2019, US bank JPMorgan Chase’s neobank Finn, and in 2020, the UK’s Royal Bank of Scotland’s neobank Bó, both failed to establish themselves, despite massive investment. Internal politics and competing technical platforms have been cited as potential root causes, but these challenges are insignificant compared to the lack of any desirable, differentiated value proposition for customers. Without one, existing customers of the parent bank had no reason to try them, and were most likely discouraged internally, to avoid cannibalisation. Potential new customers in underserved segments had no reason to select them. On the other hand, the established neos had unique propositions developed in collaboration with their customers, building trust and engagement, and scaling growth organically. Without a deliberate strategy around differentiation, it is not only traditional banks who will continue to fail at digital, but so will the explosion of independent neos and fintechs. When unattainable feature parity with competitors drives product roadmaps and turns product teams into ‘feature factories’, customers fail to see the 10X factor needed to tip them into using something new.


Tech Disruption In Retail Banking: Australia's Big Banks Hold Their Ground As Tech Takes Center Stage

Implementing technology is a key hurdle for Australia's major banks as they rely on legacy IT systems for their core operations. On the positive side, the underlying technology (such as fiber networks, the New Payments Platform, and 5G) required for innovation is already available in Australia, similar to countries where it is also widely implemented such as Sweden and China. Smaller regional and mutual banks face similar challenges, although the path will likely be easier for mutual banks that use cheaper off-the-shelf IT products and have generally stayed more up to date with core banking system upgrades than their major bank peers. Australia's network infrastructure is comprehensive and sufficient to meet the data needs of imminent technological developments; over 99% of Australia's population has mobile broadband access, including in remote areas. We believe cloud migration and adopting a microservices software architecture style will be key to banks' future operating performance in all banking systems, including Australia. Cloud-based systems significantly improve system stability and lower infrastructure costs. Flexible system architecture increases the rate at which banks can update their systems to meet changing consumer needs, while also facilitating connectivity between banks and fintechs through easier application programming interface (API) integration.



Predicting the Future with Forecasting and Agile Metrics

There are three important factors that have a much higher impact on lead time than the story size, and that when left unmanaged make our teams unpredictable. First, do we have a high amount of work in progress (WIP)? When we work on too many things at the same time we are not able to focus on finishing the tasks that are already in progress. We waste time in context switching, the quality of our work decreases, and even stories that appear to be simple end up taking longer than expected. Second, how long does work spend in queues between activities? Very often in our processes there is some waiting time between one activity and another (for example, waiting for a developer to be free to start a story, waiting for the next release, etc). These queues are often invisible, they’re not represented on our boards, and it’s really common to ignore them when we estimate, as we only tend to consider the active time that we’re going to be working on something. When these queues are not managed they lead to a lot of work in progress put on hold, which in turn leads to high unpredictability.
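The effect of WIP the author describes is captured by Little's Law: average lead time equals average WIP divided by average throughput, which is why cutting WIP, including the invisible queues, shortens lead times regardless of story size. A tiny sketch with made-up numbers:

```python
def average_lead_time(avg_wip, throughput):
    """Little's Law: lead time = WIP / throughput.

    avg_wip counts everything in progress, including items sitting in
    invisible queues; throughput is finished items per unit of time."""
    return avg_wip / throughput

# Hypothetical team finishing 5 stories per week:
# with 10 items in flight a story takes about 2 weeks end to end;
# halving WIP to 5 halves the expected lead time to 1 week.
```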


Researchers Disclose 2 Critical Vulnerabilities in SAP ASE

The former vulnerability refers to the database software failing to perform the necessary validation checks for an authenticated user while executing "dump" or "load" commands, which a malicious actor can exploit to achieve arbitrary code execution or code injection, according to the National Vulnerability Database description. "On the next backup server restart, the corruption of the configuration file will be detected by the server and it will replace the configuration with the default one. And the default configuration allows anyone to connect to the backup server using the sa login and an empty password," Rakhmanov says. CVE-2020-6252, which affects only the Windows version of SAP ASE 16 with Cockpit, stems from a different flaw: the password for logging into the helper database is stored in a configuration file that is readable by everyone on Windows. This means any valid Windows user can take the file and recover the password, log into the SQL Anywhere database as the special user "utility_db", and begin to issue commands and possibly execute code with local system privileges, Rakhmanov writes.


Serverless in the Enterprise: Building Stateful Applications

Cloud native applications allow enterprises to design, build, deploy and manage monolithic applications in more agile, nimble ways. These applications accelerate business value while driving greater operational efficiencies and cost savings through containers, a pay-as-you-go model, and a distributed runtime. However, current serverless implementations (namely, Function-as-a-Service, or FaaS for short) are unable to fully manage business logic and state in a distributed cloud native solution, which creates inefficiencies in hyperscale applications. What is required is a “stateful” approach to serverless application design. ... Unfortunately, a lot of enterprise use cases need to be stateful — such as long-running workflows, human approved processes, and e-commerce shopping cart applications. Workflows, in general, require some sort of state associated with them. Pure serverless functions can’t provide that, since they exist for short durations. Obtaining the application state is most commonly solved either by frequent database access or by saving the state at the client. But both are bad ideas from a security perspective, as well as from the perspective of scaling the database instances.
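To make the shopping-cart point concrete, here is a deliberately naive sketch (all names are hypothetical) of why a short-lived function must round-trip its state through an external store on every invocation:

```python
# A stateless FaaS handler cannot keep the cart between invocations,
# so every call loads and re-saves state. STORE stands in for the
# external database the article mentions; in a real deployment this
# dict would be a network round trip on every call.
STORE = {}

def add_to_cart(cart_id, item):
    cart = STORE.get(cart_id, [])   # load state on every invocation
    cart.append(item)
    STORE[cart_id] = cart           # persist before the function dies
    return cart
```

Each call pays the load/save cost even though the "function" itself does almost no work, which is exactly the database-scaling pressure the article warns about.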



Quote for the day:

"If you can't embrace, absorb, and integrate new tools quickly, the industry will evolve and pass you by." - Brian Dawson

Daily Tech Digest - June 03, 2020

Top network skills to succeed in a post-coronavirus world

"Cloud security is a huge topic moving forward," says James Stanger, chief technology evangelist at CompTIA, a trade association for the global IT industry. "To cut costs and increase resiliency and be more flexible, folks are moving to the cloud. We're also seeing companies that aren't used to the cloud be increasingly surprised at the lack of control" and loss of asset visibility the cloud can bring. Companies are looking for individuals who know how to create cost-effective but capable alternative business platforms, Stanger says, in case a company's primary systems become unavailable or impacted by a stay-at-home order or other event. The rise of the remote worker has also led to greater demand for people with the skills to resolve network access issues and optimize network connections. "If you have remote workers, you need to make sure they have good bandwidth," Stanger says. "If you are moving to the cloud, you need good QoS [quality of service] and bandwidth control." For that matter, any skills that support the work-from-home model will be in demand, says Jim Johnson, senior vice president for staffing firm Robert Half Technology.


Ultimate guide to artificial intelligence in the enterprise

One of the biggest risks to the effective use of AI in the enterprise is worker mistrust. Many employees fear and distrust AI or remain unconvinced of its value in the workplace. Anxieties about job elimination are not unfounded, according to many studies. A report from the Brookings Institution, "Automation and Artificial Intelligence: How Machines Are Affecting People and Places," estimated that some 36 million jobs "face high exposure to automation" in the next decade. The jobs most vulnerable to elimination are in office administration, production, transportation and food preparation, but the study found that by 2030, virtually every occupation will be affected to some degree by AI-enabled automation. Of more immediate concern is the prevailing skepticism about AI's value in the workplace: 42% of IT and business executives surveyed do not "fully understand the AI benefits and use in the workplace," according to Gartner's 2019 CIO Agenda survey. Fear of the unknown accounts for some of this skepticism, the report stated, adding that business and IT leaders must take on the challenge of quantifying the benefits of AI to employees.


Canadian major telcos effectively lock Huawei out of 5G build

Canadian carriers Bell and Telus announced on Tuesday that neither would continue using Huawei equipment in their respective 5G networks, having signed deals with the Chinese giant's rivals instead. For Bell, it announced Ericsson would be supplying its radio access network. It added that it was looking to launch 5G services as the Canadian economy exited lockdown. Bell, which in February announced it had signed an agreement with Nokia, said it was maintaining the use of multiple vendors in its upcoming network, as it had for 4G. "Ericsson plays an important role in enabling Bell's award-winning LTE network and we're pleased to grow our partnership into 5G mobile and fixed wireless technology," said Bell chief technology officer Stephen Howe. Meanwhile, the British Columbia-based Telus also chose to go with a combination of Ericsson and Nokia. The company said it had spent CA$200 billion on its network since the turn of the century, and would part with a further CA$40 billion over the next three years to deploy its 5G network. Both Bell and Telus had previously used Huawei equipment in their networks.


Cloud Based Development - From Dream to Reality

Thanks to the internet, software as a service solutions quickly brought a significant shift to a software team's daily routine. What used to be done manually and offline can now be performed more efficiently online, with real-time collaboration and quick, in-the-moment feedback loops. Nowadays, it is common for the requirements, design, test and maintenance stages of the SDLC to be performed in the cloud. Whether a business migrates to the cloud or is born in the cloud, a so-called cloud-native business, the trend is crystal clear: The cloud is here to stay and the SDLC too adopts this innovation. Except for the Implementation stage... Have you ever looked at a pull request, said to yourself "This looks good" and left a LGTM comment, without actually testing the code? Right, you have - we all have. Gitpod comes with a built-in code review feature that lets you review changes and leave comments inline. For even better productivity, you can configure Gitpod to add a PR comment with a link to a workspace that contains this exact pull request's code changes. Your workflow as a reviewer now is: Open PR; click link; review & test code.
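The PR-comment behaviour described above is configured in the repository's `.gitpod.yml`; to the best of my knowledge the relevant keys look like this (verify against Gitpod's current documentation before relying on them):

```yaml
# .gitpod.yml
github:
  prebuilds:
    pullRequests: true   # prebuild a workspace for each pull request
    addComment: true     # comment on the PR with a link to that workspace
```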


Capital One ordered to disclose third-party analysis of 2019 breach

Capital One's security flaw was rooted in a misconfigured web application firewall, similar to the flaw exploited in Equifax's 2017 breach. The WAF misconfiguration led to criticism around the company's reliance on Amazon Web Services' security. ​The bank hired Mandiant in 2015 to perform "engagement activities, results and recommendations for remediation"​ in the event of a cyber incident, according to the court document. The bank updated its agreement in January 2019 to 285 hours of service. Capital One extended its services "out of the retainer already provided to Mandiant under the Jan. 7, 2019, [statement of work]," according to the court document. But when the retainer was "exhausted," Capital One paid Mandiant using its cyber organization's funds. By December, the bank's legal department took on Mandiant's payments, redesignating the service's costs as legal fees. While Capital One said Mandiant's report was confidential, the bank said it disclosed it to about 50 Capital One employees, four regulators and the accounting firm Ernst & Young. The bank does not state whether the redesignation was made for business or legal purposes.


Internet pioneer Leonard Kleinrock on the great experiment we’re living through

Some mix of work at home will undoubtedly remain. And some jobs that used to be necessary are now being seen to be not necessary. Businesses are saying, "Gee, I didn't need that functionality anyway. We can get it either by AI or by some other automated way". And entertainment, you know, do I really want to go to the movie theater? Well… versus Netflix or whatever? We're never going to get back to where we were.  In engineering, there's a term called hysteresis. We're in such a situation now, where we've stretched the system in one direction. If we relax now, it's not going to come back to where it was. It's going to have memory of what went on. It certainly applies to medicine, to social interactions, etc. So, I find that very exciting. We are exploring things we couldn't have before, and we're finding the advantages of those things. Economically, it's a very serious issue here, what's happened and how we come back. Supply chains are being broken. How they get restructured is not clear. There are opportunities out there now for new products and services based on the fact that we are less in physical contact, more remote.


How to balance trust and technology in banking

The implementation of AI and machine learning to analyse and use data has helped financial services companies both internally – the ability to monitor account activity, complete multiple tasks at greater speed, combat fraudulent activities more effectively, and so on – and externally; data is proving to be the framework for the provision of greater user experience and the managing of trust and relationships. A common perspective in this forward-looking narrative is that banks – incumbents or ‘traditional’ in particular – face a significant challenge when it comes to developing and implementing such technologies compared to those more innovative fintech market entrants or the tech giants. However, in a report published last year exploring what the next decade holds for incumbents in the age of digital banking, HSBC suggested that this is a “common myth”, highlighting the growing landscape for collaboration between banks and fintechs and suggesting that “we are already in an era of innovative cross business collaboration which many would have not imagined a few years ago”.


How are FinTech innovation and AI disrupting traditional banking models in the ME?

The surge in demand for online banking services during the pandemic has spiked the need for fintech firms to incorporate fresh, innovative technology to meet the changing needs of customers. To meet this demand, the key sources of fintech innovation in the coming months and years are likely to be blockchain, open banking, cloud-based systems and, most importantly, AI. With increased government support in the form of stimulus packages due to COVID-19 and start-up funding, alongside customer demand, these technological innovations are set to disrupt traditional banking models - completely transforming the way we manage our finances both during and after the pandemic. At the heart of fintech innovation lie consumers. The increasingly tech-savvy, digitally minded population in the gulf region has pushed fintech firms to provide consumers with a personalised and seamless online banking experience. To achieve this, fintech firms have focused on implementing technological innovations that promise faster, cheaper, customer-centric banking services.


14 tips for CIOs managing shadow IT activities

Considering how complex IT has become, particularly in the age of the internet, the ability to know about and effectively manage IT resources -- both internal and external -- has become increasingly important. Here, we examine situations to be aware of regarding shadow IT and offer guidance to ensure that CIOs can identify and mitigate rogue activities. The primary goal for most CIOs is a smooth-running IT organization that is compliant, secure and risk-free. On the issue of security, they pay attention to any situation that threatens the confidentiality, integrity and availability of information. Non-approved installation of systems, whether on site or via cloud technology, presents possible unauthorized access to internal systems. From a risk management perspective, shadow IT presents unique challenges to CIOs and their cybersecurity and operations teams and should be a key element in those activities. The growth of cloud-based systems using software as a service (SaaS), infrastructure as a service (IaaS) and platform as a service (PaaS) represents significant opportunities for shadow IT activities. 


AI adoption – Don’t leave data governance behind

Data is the driver for AI and digital transformation. Yet time and time again, we see instances where it is not leveraged in a way that reflects its value. Of course, it is never as easy as we want – data governance and conditioning take time and resources. However, it must be viewed in terms of the benefits it will bring: observability, reproducibility, efficiency and transparency. With AI now very much a part of business function and only set to increase in reach and take-up, the enterprise must react accordingly. In understanding the main challenges and obstacles to AI adoption, companies can proactively look to tackle them. Moreover, for companies yet to begin their AI journey, prior knowledge of challenges will allow them to prepare and plan. Addressing company culture and practices early on makes a big difference down the line. Many have had to learn the hard way, so businesses should take heed when they can. It is essential that data governance procedures are given the careful consideration they require and that – as much as possible – companies avoid viewing them as an addendum tacked on to digital transformation plans.



Quote for the day:

"Do not follow where the path may lead. Go instead where there is no path and leave a trail." -- Muriel Strode