Daily Tech Digest - June 06, 2020

What is pretexting? Definition, examples and prevention

Pretexting is, by and large, illegal in the United States. For financial institutions covered by the Gramm-Leach-Bliley Act of 1999 (GLBA) — which is to say just about all financial institutions — it's illegal for any person to obtain or attempt to obtain, or to cause or attempt to cause the disclosure of, customer information of a financial institution by false pretenses or deception. GLBA-regulated institutions are also required to put standards in place to educate their own staff to recognize pretexting attempts. One thing the HP scandal revealed, however, was that it wasn't clear whether it was illegal to use pretexting to gain non-financial information — remember, HP was going after its directors' phone records, not their money. Prosecutors had to pick and choose among laws to file charges under, some of which weren't written with this kind of scenario in mind. In the wake of the scandal, Congress quickly passed the Telephone Records and Privacy Protection Act of 2006, which extended protection to records held by telecom companies. One of the best ways to prevent pretexting is simply to be aware that it's a possibility, and that techniques like email or phone spoofing can make it unclear who's reaching out to contact you.


MIT professor wants to shift power to the people by building local data collectives

The idea is to use distributed systems to give individuals and cities control over their own data. Right now, health insurance companies and hospitals have primary control of an individual's health data, and banks get the most benefit from analyzing customer data. Individuals have access to the information, but there's no easy way to put it to good use. If smaller, local organizations, like credit unions, could create a secure platform for people to manage their own data, this would shift decision-making and control to people and communities instead of national corporations. Increasing local control of data would allow leaders and people to figure out solutions that fit the needs of their communities, instead of using a one-size-fits-all approach. Pentland used the example of the Upper Peninsula of Michigan and Boston. He grew up in a rural community but now lives in an international, urban, tech-centric city. "The rules here are totally different, and what works for the Upper Peninsula does not work here," he said. "The idea is to handle local conditions locally and coordinate globally so cities can learn from each other but be responsible for themselves."


5 ways to drive agile experimentation using feature flags

Branching controls code deployment and can regulate whether a feature gets deployed. But this is only a gross, binary control that can turn the feature’s availability on and off. Using only branching to control feature deployments limits a team’s ability to control when code gets deployed compared to when product leaders enable it for end-users. There are times when product owners and development teams should deploy features and control access to them at runtime. For example, it’s useful to experiment and test features with specific customer segments or with a fraction of the user base. Feature flagging is a capability and set of tools that enable developers to wrap features with control flags. Once developers deploy the feature’s code, the flags enable them to toggle, test, and gradually roll out the feature with tools to control whether and how it appears to end-users. Feature flagging enables progressive delivery by turning on a feature slowly and in a controlled way. It also drives experimentation: features can be tested with end-users to validate impact and experience. Jon Noronha, VP Product at Optimizely, says, “Development teams must move fast without breaking things.”
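The mechanics described above — a runtime toggle, segment targeting, and a percentage rollout — can be sketched in a few lines. This is a minimal, illustrative in-memory flag store, not Optimizely's or any vendor's API; the flag names and segments below are hypothetical:

```python
import zlib

class FeatureFlags:
    """Minimal in-memory feature-flag store (illustrative only; products
    like Optimizely or LaunchDarkly add persistence and richer targeting)."""

    def __init__(self):
        self._flags = {}

    def register(self, name, enabled=False, segments=None, rollout_pct=100):
        self._flags[name] = {
            "enabled": enabled,                # master on/off switch
            "segments": set(segments or []),   # customer segments to target
            "rollout_pct": rollout_pct,        # percentage of users exposed
        }

    def is_enabled(self, name, user_id="", segment=None):
        flag = self._flags.get(name)
        if flag is None or not flag["enabled"]:
            return False
        # Segment targeting: expose the feature only to chosen segments.
        if flag["segments"] and segment not in flag["segments"]:
            return False
        # Progressive rollout: hash the user id into a stable 0-99 bucket,
        # so each user consistently sees (or doesn't see) the feature.
        bucket = zlib.crc32(user_id.encode()) % 100
        return bucket < flag["rollout_pct"]

flags = FeatureFlags()
flags.register("new_checkout", enabled=True, segments={"beta"}, rollout_pct=50)
```

Because the rollout bucket is a stable hash rather than a random draw, raising `rollout_pct` from 50 to 100 only ever adds users — nobody who already had the feature loses it mid-experiment.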


Reinforcement Learning: The Next Big Thing For AI

“Reinforcement learning entails an agent, action and reward,” said Ankur Taly, who is the head of data science at Fiddler. “The agent, such as a robot or character, interacts with its surrounding environment and observes a specific activity, responding accordingly to produce a beneficial or desired result. Reinforcement learning adheres to a specific methodology and determines the best means to obtain the best result. It’s very similar to the structure of how we play a video game, in which the agent engages in a series of trials to obtain the highest score or reward. Over several iterations, it learns to maximize its cumulative reward.” In fact, some of the most interesting use cases for reinforcement learning have been with complex games. Consider the case of DeepMind’s AlphaGo. The system used reinforcement learning to quickly understand how to play Go and was able to beat the world champion, Lee Sedol, in 2016 (the game has more potential moves than there are atoms in the universe). But there have certainly been other applications of the technology that go beyond gaming. To this end, reinforcement learning has been particularly useful in robotics.
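The agent/action/reward loop Taly describes can be made concrete with tabular Q-learning, one of the simplest reinforcement learning algorithms. The toy environment below — a five-state corridor with a reward at the right end — is invented purely for illustration:

```python
import random

# States 0..4 in a line; action 0 = left, 1 = right.
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Environment dynamics: move one cell, clipped to the corridor."""
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action] estimates

for _ in range(200):                        # episodes of trial and error
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:       # explore occasionally
            action = random.randrange(2)
        else:                               # otherwise exploit the estimate
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward
        # (immediate reward + discounted best future value).
        target = reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state
```

After training, "right" has the higher Q-value in every non-goal state — the agent has learned to maximize cumulative reward through repeated trials, exactly the loop described in the quote.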


Driving Better Analytics Through Robust Data Strategies

The idea behind developing a data strategy is to make sure all data resources are positioned in such a way that they can be used, shared, and moved easily and efficiently. Data is no longer a byproduct of business processing – it’s a critical asset that enables processing and decision making. A data strategy helps by ensuring that data is managed and used as an asset. It provides a common set of goals and objectives across projects to ensure data is used both effectively and efficiently. A data strategy establishes common methods, practices, and processes to manage, manipulate, and share data across the enterprise in a repeatable manner. While most companies have multiple data management initiatives underway (metadata, master data management, data governance, data migration, modernization, data integration, data quality, etc.), most efforts are focused on point solutions that address specific project or organizational needs. A data strategy establishes a road map for aligning these activities across each data management discipline in such a way that they complement and build on one another to deliver greater benefits.


Understandability: The Most Important Metric You’re Not Tracking

Most commercial software engineering tasks out there do not start out with a clean slate. There is an existing application, written in one or more programming languages, relying on a set of frameworks and libraries, and running on top of some operating system(s). We take it upon ourselves (or our teams) to change that existing application so that it meets some requirement, such as developing a new feature or fixing an existing bug. Simultaneously we are required to continue meeting all the existing (un)documented requirements, and maintain the existing behavior as much as possible. And, as every junior software engineer finds out on their first day on the job, writing a piece of code to solve a simple computer science problem (or copying the answer from StackOverflow) is nowhere near the level of complexity of solving that same problem within a large and intricate system. Borrowing from the financial industry, let’s define Understandability: “Understandability is the concept that a system should be presented so that an engineer can easily comprehend it.” The more understandable a system is, the easier it will be for engineers to change it in a predictable and safe manner.


COVID-19 puts WLAN engineers in the spotlight

For Dickey and other WLAN professionals, the pandemic has demonstrated the critical importance of wireless communications. Nearly two-thirds of American workers – double the number from early March – are doing their jobs via home wireless, according to a Gallup poll. Cisco, in its latest earnings report, announced that 95% of its employees are working from home. That means WLAN pros have had to shift their attention from maintaining corporate networks to remotely assisting workers, many of whom are non-technical, in getting their home networks up to speed and securely connected to corporate assets. Tam Dell'Oro, founder and CEO of the Dell'Oro Group, surveyed about 20 enterprise network managers and WLAN distributors, and reports that new in-building deployments have pretty much stopped cold. She adds that with WLAN pros charged with setting up and securing at-home workers, "remote access devices, particularly those with higher WAN connectivity and higher security, are flying off the shelf." IDC analyst Brandon Butler says the 2020 forecast for the WLAN industry has been downgraded from the 5.1% growth rate predicted prior to the pandemic to a 2.3% decline.


Explore API documentation basics and best practices

Good API documentation does not happen by accident. It takes clear guidelines, a consistent team effort, stringent peer review and a commitment to maintain documentation throughout an API's lifecycle. Some top API documentation best practices you should implement include the following. Include all necessary components: a complete documentation package usually has sections on authentication, error messages, resource usage, acceptable use policies, and a comprehensive change log. Some API documentation also includes a series of guides that provide detailed examples of API use and use cases. Know the intended audience: tailor API documentation for the intended audience. If the documentation is intended for novice developers, focus on things like tutorials, samples and guides. If the documentation is intended for experienced developers, build up reference material detailing syntax, parameters, arguments and response details. Consider how to include vested non-developers, such as project managers or even CTOs, in the API documentation.
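To make the "necessary components" concrete, here is a sketch of reference documentation for a single hypothetical endpoint, written as a Python docstring. The endpoint, parameters, and error types are all invented for illustration; the point is that authentication, parameters, return shape, and error messages each get an explicit section:

```python
def list_invoices(api_key: str, customer_id: str, page: int = 1):
    """Return one page of invoices for a customer.

    Authentication:
        Pass a valid API key via ``api_key``; requests without one
        fail with ``PermissionError``.

    Args:
        api_key: The caller's secret API key.
        customer_id: Identifier of the customer whose invoices to list.
        page: 1-based page number (25 results per page).

    Returns:
        A list of invoice dicts with ``id`` and ``amount`` keys.

    Raises:
        PermissionError: If the API key is missing or invalid.
        ValueError: If ``page`` is less than 1.
    """
    if not api_key:
        raise PermissionError("missing API key")
    if page < 1:
        raise ValueError("page must be >= 1")
    # Stubbed response standing in for a real backend call.
    return [{"id": f"{customer_id}-{page}-1", "amount": 100}]
```

Note how the same structure serves both audiences described above: the Args/Returns/Raises sections are reference material for experienced developers, while the prose sections orient novices.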


Equality of Actors in Enterprise Architecture

With ‘Information Technology’ we normally designate our modern digital equipment. However, for millennia humanity has used information technologies to record and transmit information. To underline the significance of information technology, the difference between prehistory and history lies in the use of information technology — the ‘history era’ is synonymous with the ‘information age’. Floridi argues that recently we have entered the era of hyperhistory with the invention of the computer. The difference between hyperhistory and history is that in history ITs only record and transmit information, while in hyperhistory computers have the capability to process information. As a basic function, computers are able to store information, and this already makes a big difference compared with the labor-intensive manual recording of data that prevailed until the sixties or seventies. Moreover, the computer can process this stored information and make computations that beforehand were the prerogative of humans. As a side remark, until the nineteenth century the term ‘computer’ was synonymous with a person who ‘performs calculations’, not a machine.


Data Scientist vs Data Analyst. Here’s the Difference.

While the data science and machine learning fields share confusion between their job descriptions, employers, and the general public, the difference between data science and data analytics is easier to draw. However, there are still similarities along with the key differences between the two fields and job positions. Some would say a data analytics role is a prerequisite to being hired as a data scientist. This article aims to shed light on what it means to be a data scientist and a data analyst, from a professional in both fields. While I was studying to become a data scientist, as a working data analyst, I realized that data science theory is vastly different from that of data analytics. That is not to say that data science does not share the same tools and programming languages as data analytics. One could also argue that data science is a form of data analytics because ultimately, you are working with data — transforming, visualizing, and coming to a conclusion for actionable change. So if they are so similar, or one is under the other, why write an article on these two popular fields? The reason is that people who are coming into either field can learn from here.



Quote for the day:

“It’s only after you’ve stepped outside your comfort zone that you begin to change, grow, and transform.” -- Roy T. Bennett

Daily Tech Digest - June 05, 2020

New report underscores the high costs and challenges of virtual workforces

DaaS was introduced as a cloud-based successor to legacy tech meant to reshape the virtual desktop landscape, yet it appears to be marred by the same underlying hidden costs and similar management challenges. "DaaS is essentially just VDI that has been crudely shoved into the cloud. DaaS sought to 'disrupt' VDI by moving certain aspects of virtual desktop delivery to the cloud, but it still relies on the same costly and complex underlying infrastructure," Henshaw said. Although these offerings are intended to streamline operations and cut costs, some organizations quickly find the opposite to be true. These unforeseen challenges and unanticipated costs are often burdensome for organizations. More than half of current DaaS and VDI users surveyed note that the adoption of these systems required a minimum of 10 full-time employees dedicated to their management. ... VDI systems require purchasing lots of hardware upfront, and the costs continue to mount over the years due to maintenance and upkeep. Comparatively, virtual application delivery platforms allow companies to save up to 75% of costs related to upfront infrastructure and recurrent licensing fees, per the report.


Fix spaghetti code and other pasta-theory antipatterns

Carelessness isn't the only way to end up with hard-to-maintain code; over-complication can be a cause too. In 2013, I took a job at a technology company that had one of the most confusing codebases I have ever seen. Every element of the product was abstracted into dozens of singular, nested components. It was nearly impossible to make a change in one layer of the stack without affecting every other layer. The codebase wasn't a mess, but it wasn't maintainable either. If spaghetti code suffers from architectural sloppiness, lasagna code is a characteristic of over-engineering at its most extreme. Programmers who work with object-oriented codebases often fall into this trap. Lasagna code consists of abstracted, tightly coupled layers; developers write it when they are convinced every subcomponent necessitates its own object. These programmers tend to focus on the layout of code at the expense of its maintainability. What starts out as a highly organized codebase quickly becomes an over-architected disaster. Here's a good rule to follow: be conservative with your abstractions.


What’s the difference between supervised and unsupervised?

Supervised machine learning applies to situations where you know the outcome of your input data. Say you want to create an image classification machine learning algorithm that can detect images of cats, dogs, and horses. To train the AI model, you must gather a large dataset of cat, dog, and horse photos. But before feeding them to the machine learning algorithm, you must annotate them with the name of their respective classes. Annotation might include putting the images of each class in a separate folder, using a file-naming convention, or appending metadata to the image file. This is the laborious manual task that is often referred to in stories that mention AI sweatshops. Once the data is labeled, the machine learning algorithm (e.g., a convolutional neural network or a support vector machine) processes the examples and develops a mathematical model that can map each image to its correct class. If the AI model is trained on enough labeled examples, it will be able to accurately detect the class of new images that contain cats, dogs, or horses. Supervised machine learning solves two types of problems: classification and regression.
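The labeled-examples-to-model mapping can be illustrated without any ML library. The sketch below substitutes a toy nearest-centroid classifier on 2-D points for the image networks mentioned above; the points and their "cat"/"dog" labels are made up, but the workflow is the same: learn from annotated examples, then predict the class of unseen inputs.

```python
# Tiny supervised classifier: learn from labeled examples,
# then predict labels for new, unseen points.
def train_centroids(examples):
    """examples: list of ((x, y), label). Returns label -> mean point."""
    sums = {}
    for (x, y), label in examples:
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {lbl: (sx / n, sy / n) for lbl, (sx, sy, n) in sums.items()}

def predict(centroids, point):
    """Assign the label of the nearest class centroid (squared distance)."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2 +
                               (centroids[lbl][1] - py) ** 2)

# Annotation step: every training point carries its class name.
labeled = [((0, 0), "cat"), ((1, 0), "cat"),
           ((5, 5), "dog"), ((6, 5), "dog")]
model = train_centroids(labeled)
```

A real image classifier replaces the (x, y) pairs with high-dimensional feature vectors and the centroid rule with a learned network, but the supervised contract — labeled inputs in, predictive model out — is identical.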


The digital transformation’s next frontier: Open banking

Open banking, as a standard, is a regulatory framework that guides how financial institutions create, share and access consumer financial data. That data is shared freely through a network of financial institutions and financial technology companies. But there are two important traits in open banking that distinguish it from traditional approaches: consumers consent to banks sharing their data with third parties, and all financial institutions open their programs and interfaces to third-party developers. These attributes aim to help consumers better understand their financial data so they can make better decisions. Banks, in turn, can provide a more seamless consumer experience through the use of mobile applications. Technology makes this all possible. For consumers, the infrastructure increasingly is there. Smart mobile devices have become smarter and more powerful, and they have continued to shape consumer behavior. More and more, this means that consumers are demanding digital conveniences like paperless use of money and virtual banking.


Clustering Non-Numeric Data Using C#

Clustering data is the process of grouping items so that items in a group (cluster) are similar and items in different groups are dissimilar. After data has been clustered, the results can be visually analyzed by a human to see if any useful patterns emerge. For example, clustered sales data could reveal that certain types of items are often purchased together, and that information could be useful for targeted advertising. Clustered data can also be programmatically analyzed to find anomalous items. For completely numeric data, the k-means clustering algorithm is simple and effective, especially if the k-means++ initialization technique is used. But non-numeric data (also called categorical data) is surprisingly difficult to cluster. ... In order to use the code presented here with your own data you must have intermediate or better programming skill with a C-family language. In order to significantly modify the demo algorithm you must have expert level programming skill. This article doesn't assume you know anything about clustering or category utility. The demo is coded in C# but you shouldn't have too much trouble refactoring the code to another language if you wish.
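The article's demo is written in C#, but why categorical data resists k-means can be shown with a short k-modes-style sketch: mismatch counts replace Euclidean distance, and per-attribute modes replace means. This is a simplified stand-in in Python, not the article's category-utility algorithm, and the sample records are invented:

```python
from collections import Counter

def mismatches(a, b):
    """Dissimilarity between two categorical records: attributes that differ."""
    return sum(x != y for x, y in zip(a, b))

def mode_of(cluster):
    """Per-attribute most common value — the cluster's 'mode' record."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))

def k_modes(data, modes, iters=10):
    """Alternate assignment and mode update, like k-means but for categories."""
    for _ in range(iters):
        clusters = [[] for _ in modes]
        for row in data:
            best = min(range(len(modes)),
                       key=lambda i: mismatches(row, modes[i]))
            clusters[best].append(row)
        # Recompute each cluster's mode (keep the old one if a cluster empties).
        modes = [mode_of(c) if c else m for c, m in zip(clusters, modes)]
    return modes, clusters

data = [("red", "small", "plastic"), ("red", "small", "wood"),
        ("blue", "large", "metal"), ("blue", "large", "plastic")]
modes, clusters = k_modes(data, [data[0], data[2]])
```

Because there is no meaningful "average" of `"plastic"` and `"wood"`, everything hinges on the mismatch count and the mode choice — which is exactly why categorical clustering is harder than numeric k-means, and why the article reaches for a category-utility measure instead.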


Installing OpenCV and ImageAI for Object Detection

In this series, we’ll learn how to use Python, OpenCV (an open source computer vision library), and ImageAI (a deep learning library for vision) to train AI to detect whether workers are wearing hardhats. In the process, we’ll create an end-to-end solution you can use in real life—this isn’t just an academic exercise! This is an important use case because many companies must ensure workers have the proper safety equipment. But what we’ll learn is useful beyond just detecting hardhats. By the end of the series, you’ll be able to use AI to detect nearly any kind of object in an image or video stream. ... OpenCV is an open-source computer vision library with C++, Python, Java, and MATLAB interfaces. ImageAI is a machine learning library that simplifies AI training and object detection in images. These two libraries make it extremely easy to solve a number of object detection problems in images and videos. We’re going to dive straight into our solution by setting these libraries up using Python in a Jupyter Notebook (on Windows).


How Covid-19 has changed the role of the CTO

While the workforce is remote, its individuals are more likely to work at all hours of the day from several locations. Accordingly, CTOs must structure their team as if all company departments are now operating 24/7. This can be a drastic change for some operational leaders, who must consider whether outsourcing partial or total systems coverage to a third-party specialist provider would be a more efficient way to accommodate this demand, while preserving internal company resources. Once the pandemic has subsided, the new CTO must act as a major decision-maker at the company and co-lead the process of returning employees to the office environment, ensuring that the transition back is as seamless as possible. This may include enforcing pre-return system checks for each office location, ensuring that all devices are updated with their latest software patches, creating a specialised portal for employees to flag technical issues and designing programs to help map and/or reconcile company data stored across multiple company and personal devices. The onus is on CTOs to optimise the tools and channels through which company data is shared.


What matters most in an Agile organizational structure

Agile software developers aim to better meet customer needs. To do so, they need to prioritize, release and adapt software products more easily. Unlike the Spotify-inspired tribe structure, Agile teams should remain located close to the operations teams that will ultimately support and scale their work, according to the authors. This model, they argue in Doing Agile Right, promotes accountability for change and willingness to innovate on the business side. Any Agile initiative should follow the sequence of "test, learn, and scale." People at the top levels must accept new ideas, which will drive others to accept them as well. Then, innovation comes from the opposite direction. "Agile works best when decisions are pushed down the organization as far as possible, so long as people have appropriate guidelines and expectations about when to escalate a decision to a higher level." A successful Agile transformation might require some pilot teams and perhaps several years to scale across the business, as was the case with Bosch Power Tools, profiled as an example in Doing Agile Right.


Infosec 2020: Covid-19 an opportunity to change security thinking

“As much as this situation presents a challenge to security behaviours and culture, we need to think about how to engage people on a personal level,” she said. Barker advised that security teams try to engage their users about security at home and how they’re using digital platforms to connect to family and friends, shopping, gaming, and using social media. “There are lots of avenues to talk to people about security, and that mindset expands to think more about the business as well,” she said. Vincent Blake, vice-president and IT security officer at publisher Pearson, agreed that Covid-19 might create an opportunity for security by enabling security teams to turn the conversation away from restrictive “thou shalt not” policy and towards gentle advice on how to protect yourself online. Mark Osborne, Europe, Middle East and Africa (EMEA) CISO at real estate firm JLL, said that security teams were getting a lot of kudos thanks to how they have stepped up and kept teams running during a critical black swan event, and said they could ride this wave of positivity to improve relationships with the business afterwards.


Running Axon Server - CQRS and Event Sourcing in Java

Typical CQRS applications have components exchanging commands and events, with persistence of the aggregates handled via explicit commands, and query models optimized for their usage and built from the events that report on the aggregate’s state. The aggregate persistence layer in this setup can be built on RDBMS storage layers or NoSQL document components, and standard JPA/JDBC based repositories are included in the framework core. The same holds for storage of the query models. The communication for exchanging the messages can be solved with most standard messaging components, but the usage patterns do favour specific implementations for the different scenarios. We can use pretty much any modern messaging solution for the publish-subscribe pattern, as long as we can ensure no messages are lost, because we want the query models to faithfully represent the aggregate’s state. For commands we need to extend the basic one-way messaging into a request-reply pattern, if only to ensure we can detect the unavailability of command handlers. Other replies might be the resulting state of the aggregate, or a detailed validation failure report if the update was vetoed.
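The command/event/query-model split described above can be sketched without any framework (Axon supplies the real command bus, event store, and projection machinery in Java). Everything below — the account aggregate, the deposit/withdraw commands, the in-memory event store — is a hypothetical Python toy, but it shows the request-reply command side vetoing an update and the query model being rebuilt purely from events:

```python
# Minimal CQRS/event-sourcing sketch: commands are validated and either
# emit an event or return a failure reply; the query model is a projection
# built only from the emitted events.
events = []       # append-only event store
balances = {}     # query model, derived entirely from events

def apply_event(event):
    """Query side: project each event onto the read model."""
    sign = 1 if event["type"] == "deposited" else -1
    acct = event["account"]
    balances[acct] = balances.get(acct, 0) + sign * event["amount"]

def handle_command(cmd):
    """Command side: request-reply — the caller learns whether the
    update was accepted or vetoed with a validation failure."""
    if cmd["type"] == "deposit":
        event = {"type": "deposited", "account": cmd["account"],
                 "amount": cmd["amount"]}
    elif cmd["type"] == "withdraw":
        if balances.get(cmd["account"], 0) < cmd["amount"]:
            return {"ok": False, "reason": "insufficient funds"}  # vetoed
        event = {"type": "withdrawn", "account": cmd["account"],
                 "amount": cmd["amount"]}
    else:
        return {"ok": False, "reason": "unknown command"}
    events.append(event)       # persist, then
    apply_event(event)         # publish to the query-model subscriber
    return {"ok": True}

handle_command({"type": "deposit", "account": "a1", "amount": 100})
handle_command({"type": "withdraw", "account": "a1", "amount": 30})
```

The "no messages may be lost" requirement in the text corresponds to the append-then-apply ordering here: if an event never reaches `apply_event`, the query model silently diverges from the aggregate's true state.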



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith

Daily Tech Digest - June 04, 2020

Machine learning will transform the banking sector

The unified approach that AI provides allows institutions to see the financial status of a customer across multiple accounts in an instant. This information combines with previously contracted products, transaction histories, and individual interactions to ensure that personalized services are always available. This technology redefines the definition of customization. Instead of requiring a customer to complete a series of questionnaires or surveys to match specific data points to potential products of interest, machine learning automates this process by looking at all of the activities of the consumer throughout that person’s history with the organization. It can even pull information from the news or social media posts to determine the viability of an offer before one gets requested. That makes the predictive mechanisms more accurate, speeding up the time it takes for someone to complete the processes needed to access something new. As improvements to data warehousing and information processing continue, the existing machine learning models will use specific financial profiles that the technology develops internally to create unique initiatives that encourage increases in customer interactions.


Survey: Security Concerns Slow Down IoT Deployments

Although security requirements may mean IoT projects will take longer, it also means that enterprises are cognizant of how a growing number of devices on networks introduces new attack vectors. "The risks with IoT potentially increase because of the diversity of deployments and technologies in IoT networks - general enterprise security is much more standardized and so easier to deploy and keep updated," says Alexandra Rehak, chief analyst with the London-based consultancy Omdia and head of its internet of things practice. The Omdia-Syniverse IoT Enterprise Survey polled 200 enterprises between January and March in North America and Europe that have deployed IoT devices. It was commissioned by Syniverse, which offers private networks for fleets of IoT devices. Of all enterprises polled, 86% reported that IoT projects were delayed or constrained by security. The survey covered companies in healthcare, financial services, manufacturing, retail and hospitality and transportation. Their concerns over security vary. For example, the manufacturing industry is most worried about unauthorized devices joining the network. Healthcare and finance rank regulatory and compliance concerns high.


Layoffs rock the IT services industry amid move to the cloud

So what are we to make of all this? The obvious culprit is the move to the cloud. Enterprises don’t need to hire as many consultants making six-figure salaries when AWS or Microsoft is handling half of your IT load. But DXC said it wasn’t the cloud, it was its own bloated hierarchy, which is quite an admission. “There is no doubt that these big consulting firms are having to pivot because of the cloud and it’s hitting them in the bottom line,” says Joshua Greenbaum, president of Enterprise Application Consulting, an independent consultancy. “The quarantine and emergency has accelerated plans to move to the cloud and put the brakes on projects that would have been lucrative. The combination of the two means a lot of stuff is put on hold.” Whether those jobs come back is questionable. Not helping matters is machine-learning software like the recently announced SAP Cloud ALM, which automates a lot of the basic work of cloud lifecycle management. There’s no doubt that demand for the basic, low-level services of getting data centers up and running doesn’t come back. There will be a shift. At the end of the day, complexity is king in enterprise software. What is made easy today is made more complex tomorrow, and that will need more skills.


What are you tuning for?

Are you tuning to improve your SQL Server licensing footprint? You can tune for CPU reduction in repetitive queries. You can tune indexes and statistics to make certain queries faster and more efficient, which in turn reduces the CPU, memory, and storage thrash while the commands are executing. You might even be able to use less of the things that SQL Server licensing is based on, namely CPUs. Are you tuning for end-user productivity? Can you quantify the pain points the users are ‘feeling’ each and every day? Can you pinpoint the database commands that are underneath those application features? Are they database-driven, or is it more application data handling that is slowing down the function? Maybe the volume of data that the business accesses daily is so high that all-flash storage is the biggest gain you can make. What if faster CPUs, and not just more cores, would get your users a larger bang for the buck? Are your repetitive queries optimal? Can you even access the commands to tune them, such as queries underneath third-party applications? What if tuning for end-user productivity meant increasing parallelism or adding to your licensing footprint?


Constructing the future for engineering – finding the right model where one size does not fit all

If anything goes wrong or needs adjustment, there is no “back to the drawing board” any more. In fact, not for a long while. It’s all about accessing the right type of data at the right stage in the process, meaning that all of these stages have to be completely interlinked. Indeed, their success depends on constant collaboration and communication between the various people engaged in carrying out their individual activities, who may be located virtually anywhere in the world. Many of the modern world’s most famous engineering projects could only have been realised by bringing together talent from around the globe with a multitude of different departments and workflows in one extended, virtual team. And it’s not just engineering and design these days – collaboration has to extend to marketing and sales so that marketable and sellable concepts are what is ultimately built and put on sale. Crucially, this information also has to extend in a business-relatable form to boardrooms. Photorealistic renderings of finished products are not just pretty pictures – they are pretty essential. This has been the fundamental model of engineering for the past two decades.


Digital banking is now for everyone, how will you choose to compete?

There have been attempts by traditional banks to break free from their analogue worlds and colonise new digital planets. In 2019 JPMorgan Chase’s US neobank Finn, and in 2020 Royal Bank of Scotland’s UK neobank Bó, failed to establish themselves despite massive investment. Internal politics and competing technical platforms have been cited as potential root causes, but these challenges are insignificant compared to the lack of any desirable, differentiated value proposition for customers. Without one, existing customers of the parent bank had no reason to try them, and were most likely internally discouraged to avoid cannibalisation. Potential new customers in underserved segments had no reason to select them. On the other hand, the established neos had unique propositions developed in collaboration with their customers, building trust and engagement, and scaling growth organically. Without a deliberate strategy around differentiation, it is not only traditional banks who will continue to fail at digital, but so will the explosion of independent neos and fintechs. When unattainable feature parity with competitors drives product roadmaps and turns product teams into ‘feature factories’, customers fail to see any 10X factor needed to tip them into using something new.


Tech Disruption In Retail Banking: Australia's Big Banks Hold Their Ground As Tech Takes Center Stage

Implementing technology is a key hurdle for Australia's major banks as they rely on legacy IT systems for their core operations. On the positive side, the underlying technology (such as fiber networks, the New Payments Platform, and 5G) required for innovation is already available in Australia, as it is in countries such as Sweden and China where it is also widely implemented. Smaller regional and mutual banks face similar challenges, although the path will likely be easier for mutual banks that use cheaper off-the-shelf IT products and have generally stayed more up to date with core banking system upgrades than their major bank peers. Australia's network infrastructure is comprehensive and sufficient to meet the data needs of imminent technological developments; over 99% of Australia's population has mobile broadband access, including in remote areas. We believe cloud migration and adopting a microservices software architecture style will be key to banks' future operating performance in all banking systems, including Australia's. Cloud-based systems significantly improve system stability and lower infrastructure costs. Flexible system architecture increases the rate at which banks can update their systems to meet changing consumer needs, while also facilitating connectivity between banks and fintechs through easier application programming interface (API) integration.



Predicting the Future with Forecasting and Agile Metrics

There are three important factors that have a much higher impact on lead time than story size, and that, when left unmanaged, make our teams unpredictable. First, do we have a high amount of work in progress (WIP)? When we work on too many things at the same time, we are not able to focus on finishing the tasks that are already in progress. We waste time in context switching, the quality of our work decreases, and even stories that appear to be simple end up taking longer than expected. Second, how long does work spend in queues between activities? Very often in our processes there is some waiting time between one activity and another (for example, waiting for a developer to be free to start a story, or waiting for the next release). These queues are often invisible; they’re not represented on our boards, and it’s really common to ignore them when we estimate, as we only tend to consider the active time that we’re going to be working on something. When these queues are not managed, they lead to a lot of work in progress put on hold, which in turn leads to high unpredictability.
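The impact of WIP and queue time on lead time can be made concrete with Little's Law (average lead time = average WIP / average throughput) and a simple Monte Carlo forecast that samples historical daily throughput. The numbers and sampling approach below are purely illustrative, not taken from the talk:

```python
import random

def average_lead_time(avg_wip, avg_throughput_per_day):
    """Little's Law: average lead time = average WIP / average throughput."""
    return avg_wip / avg_throughput_per_day

def forecast_days(remaining_items, daily_throughput_samples, trials=10_000, seed=42):
    """Monte Carlo forecast: repeatedly replay random days drawn from
    historical throughput until the backlog is done, then report the
    85th-percentile duration across all simulated runs."""
    rng = random.Random(seed)
    durations = []
    for _ in range(trials):
        done, days = 0, 0
        while done < remaining_items:
            done += rng.choice(daily_throughput_samples)
            days += 1
        durations.append(days)
    durations.sort()
    return durations[int(trials * 0.85)]

# Cutting WIP from 30 to 10 items at the same throughput (2 items/day)
# cuts the average lead time from 15 days to 5.
print(average_lead_time(30, 2))  # 15.0
print(average_lead_time(10, 2))  # 5.0
# Forecast for 20 remaining stories given observed daily completions.
print(forecast_days(20, [0, 1, 1, 2, 3]))
```

Note how reducing WIP at constant throughput directly reduces lead time, which is why limiting WIP and managing queues are more effective predictability levers than refining story-size estimates.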


Researchers Disclose 2 Critical Vulnerabilities in SAP ASE

The former vulnerability refers to the database software failing to perform the necessary validation checks for an authenticated user while executing "dump" or "load" commands, which can be exploited by a malicious actor to achieve arbitrary code execution or code injection, according to the National Vulnerability Database description. "On the next backup server restart, the corruption of configuration file will be detected by the server and it will replace the configuration with the default one. And the default configuration allows anyone to connect to the backup server using the sa login and an empty password," Rakhmanov says. "The problem is that the password to log into the helper database is in a configuration file that is readable by everyone on Windows." CVE-2020-6252 affects only the Windows version of SAP ASE 16 with Cockpit. Because that configuration file is world-readable, any valid Windows user can take the file and then recover the password. They are then able to log into the SQL Anywhere database as the special user "utility_db" and begin to issue commands and possibly execute code with local system privileges, Rakhmanov writes.
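A world-readable file holding a credential, the root cause behind CVE-2020-6252, is a class of problem that is easy to check for mechanically. The snippet below is an illustrative POSIX-style check (Windows uses ACLs rather than Unix mode bits, so this is an analogy to the SAP issue, not a reproduction of it):

```python
import os
import stat
import tempfile

def world_readable(path: str) -> bool:
    """True if the 'other' read bit is set, i.e. any local user can read it."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

# Write a stand-in configuration file containing a credential.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"sa_password=secret")

os.chmod(path, 0o644)
print(world_readable(path))  # True: any local user could recover the password

os.chmod(path, 0o600)
print(world_readable(path))  # False once permissions are tightened

os.unlink(path)
```

Auditing configuration directories with a check like this is a cheap way to catch credential files that ship or get rewritten with overly permissive modes.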


Serverless in the Enterprise: Building Stateful Applications

Cloud native applications allow enterprises to design, build, deploy and manage applications in more agile, nimble ways. These applications accelerate business value while driving greater operational efficiencies and cost savings through containers, a pay-as-you-go model, and a distributed runtime. However, current serverless implementations (namely, Function-as-a-Service, or FaaS for short) are unable to fully manage business logic and state in a distributed cloud native solution, which creates inefficiencies in hyperscale applications. What is required is a “stateful” approach to serverless application design. ... Unfortunately, a lot of enterprise use cases need to be stateful — such as long-running workflows, human-approved processes, and e-commerce shopping cart applications. Workflows, in general, require some sort of state associated with them. Pure serverless functions can’t provide that, since they exist only for short durations. Obtaining the application state is most commonly solved by either frequent database access or saving the state at the client. But both are bad ideas from a security perspective, as well as from the perspective of scaling the database instances.
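Because a FaaS instance exists only briefly, any state such as a shopping cart has to be rehydrated from, and persisted back to, external storage on every invocation; that is the overhead the passage describes. A minimal sketch, where an in-memory dict stands in for the external database:

```python
# A stand-in for external storage (e.g. a database or cache); a short-lived
# function instance cannot simply keep the cart in its own memory.
store = {}

def add_to_cart(session_id, item):
    """FaaS-style handler: rehydrate state, mutate it, persist it, exit."""
    cart = store.get(session_id, [])  # extra read on every invocation
    cart.append(item)
    store[session_id] = cart          # extra write on every invocation
    return cart

print(add_to_cart("session-1", "keyboard"))  # ['keyboard']
print(add_to_cart("session-1", "mouse"))     # ['keyboard', 'mouse']
```

Every call pays a read and a write against the store, which is exactly why frequent database round-trips become the scaling bottleneck for stateful workloads on stateless functions.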



Quote for the day:

"If you can't embrace, absorb, and integrate new tools quickly, the industry will evolve and pass you by." - Brian Dawson

Daily Tech Digest - June 03, 2020

Top network skills to succeed in a post-coronavirus world

"Cloud security is a huge topic moving forward," says James Stanger, chief technology evangelist at CompTIA, a trade association for the global IT industry. "To cut costs and increase resiliency and be more flexible, folks are moving to the cloud. We're also seeing companies that aren't used to the cloud be increasingly surprised at the lack of control" and loss of asset visibility the cloud can bring. Companies are looking for individuals who know how to create cost-effective but capable alternative business platforms, Stanger says, in case a company's primary systems become unavailable or impacted by a stay-at-home order or other event. The rise of the remote worker has also led to greater demand for people with the skills to resolve network access issues and optimize network connections. "If you have remote workers, you need to make sure they have good bandwidth," Stanger says. "If you are moving to the cloud, you need good QoS [quality of service] and bandwidth control." For that matter, any skills that support the work-from-home model will be in demand, says Jim Johnson, senior vice president for staffing firm Robert Half Technology.


Ultimate guide to artificial intelligence in the enterprise

One of the biggest risks to the effective use of AI in the enterprise is worker mistrust. Many employees fear and distrust AI or remain unconvinced of its value in the workplace. Anxieties about job elimination are not unfounded, according to many studies. A report from the Brookings Institution, "Automation and Artificial Intelligence: How Machines Are Affecting People and Places," estimated that some 36 million jobs "face high exposure to automation" in the next decade. The jobs most vulnerable to elimination are in office administration, production, transportation and food preparation, but the study found that by 2030, virtually every occupation will be affected to some degree by AI-enabled automation. Of more immediate concern is the prevailing skepticism about AI's value in the workplace: 42% of IT and business executives surveyed do not "fully understand the AI benefits and use in the workplace," according to Gartner's 2019 CIO Agenda survey. Fear of the unknown accounts for some of this skepticism, the report stated, adding that business and IT leaders must take on the challenge of quantifying the benefits of AI to employees.


Canadian major telcos effectively lock Huawei out of 5G build

Canadian carriers Bell and Telus announced on Tuesday that neither would continue using Huawei equipment in their respective 5G networks, having signed deals with the Chinese giant's rivals instead. Bell announced that Ericsson would be supplying its radio access network, and added that it was looking to launch 5G services as the Canadian economy exited lockdown. Bell, which in February announced it had signed an agreement with Nokia, said it was maintaining the use of multiple vendors in its upcoming network, as it had for 4G. "Ericsson plays an important role in enabling Bell's award-winning LTE network and we're pleased to grow our partnership into 5G mobile and fixed wireless technology," said Bell chief technology officer Stephen Howe. Meanwhile, the British Columbia-based Telus also chose to go with a combination of Ericsson and Nokia. The company said it had spent CA$200 billion on its network since the turn of the century, and would part with a further CA$40 billion over the next three years to deploy its 5G network. Both Bell and Telus had previously used Huawei equipment in their networks.


Cloud Based Development - From Dream to Reality

Thanks to the internet, software-as-a-service solutions quickly brought a significant shift to a software team's daily routine. What used to be done manually and offline can now be performed more efficiently online, with real-time collaboration and quick, in-the-moment feedback loops. Nowadays, it is common for the requirements, design, test and maintenance stages of the SDLC to be performed in the cloud. Whether a business migrates to the cloud or is born in the cloud, a so-called cloud-native business, the trend is crystal clear: The cloud is here to stay, and the SDLC, too, is adopting this innovation. Except for the implementation stage... Have you ever looked at a pull request, said to yourself "This looks good" and left an LGTM comment, without actually testing the code? Right, you have - we all have. Gitpod comes with a built-in code review feature that lets you review changes and leave comments inline. For even better productivity, you can configure Gitpod to add a PR comment with a link to a workspace that contains this exact pull request's code changes. Your workflow as a reviewer now is: Open PR; click link; review & test code.


Capital One ordered to disclose third-party analysis of 2019 breach

Capital One's security flaw was rooted in a misconfigured web application firewall, similar to the flaw exploited in Equifax's 2017 breach. The WAF misconfiguration led to criticism around the company's reliance on Amazon Web Services' security. The bank hired Mandiant in 2015 to perform "engagement activities, results and recommendations for remediation" in the event of a cyber incident, according to the court document. The bank updated its agreement in January 2019 to 285 hours of service. Capital One extended its services "out of the retainer already provided to Mandiant under the Jan. 7, 2019, [statement of work]," according to the court document. But when the retainer was "exhausted," Capital One paid Mandiant using its cyber organization's funds. By December, the bank's legal department took on Mandiant's payments, redesignating the service's costs as legal fees. While Capital One said Mandiant's report was confidential, the bank said it disclosed it to about 50 Capital One employees, four regulators and the accounting firm Ernst & Young. The bank did not state whether these disclosures were for business or legal purposes.


Internet pioneer Leonard Kleinrock on the great experiment we’re living through

Some mix of work at home will undoubtedly remain. And some jobs that used to be necessary are now being seen to be not necessary. Businesses are saying, "Gee, I didn't need that functionality anyway. We can get it either by AI or by some other automated way". And entertainment, you know, do I really want to go to the movie theater? Well… versus Netflix or whatever? We're never going to get back to where we were. In engineering, there's a term called hysteresis. We're in such a situation now, where we've stretched the system in one direction. If we relax now, it's not going to come back to where it was. It's going to have memory of what went on. It certainly applies to medicine, to social interactions, etc. So, I find that very exciting. We're exploring things we couldn't have explored before, and we're finding the advantages of those things. Economically, it's a very serious issue here, what's happened and how we come back. Supply chains are being broken. How they get restructured is not clear. There are opportunities out there now for new products and services based on the fact that we are less in physical contact, more remote.


How to balance trust and technology in banking

The implementation of AI and machine learning to analyse and use data has helped financial services companies both internally – in the ability to monitor account activity, complete multiple tasks at greater speed and combat fraudulent activities more effectively – and externally, where data is proving to be the framework for the provision of a better user experience and the managing of trust and relationships. A common perspective in this forward-looking narrative is that banks – incumbents or ‘traditional’ players in particular – face a significant challenge when it comes to developing and implementing such technologies compared to the more innovative fintech market entrants or the tech giants. However, in a report published last year exploring what the next decade holds for incumbents in the age of digital banking, HSBC suggested that this is a “common myth”, highlighting the growing landscape for collaboration between banks and fintechs and suggesting that “we are already in an era of innovative cross business collaboration which many would have not imagined a few years ago”.


How are FinTech innovation and AI disrupting traditional banking models in the ME?

The surge in demand for online banking services during the pandemic has spiked the need for fintech firms to incorporate fresh, innovative technology to meet the changing needs of customers. To meet this demand, the key sources of fintech innovation in the coming months and years are likely to be blockchain, open banking, cloud-based systems and, most importantly, AI. With increased government support in the form of COVID-19 stimulus packages and start-up funding, alongside customer demand, these technological innovations are set to disrupt traditional banking models, completely transforming the way we manage our finances both during and after the pandemic. At the heart of fintech innovation lie consumers. The increasingly tech-savvy, digitally minded population in the Gulf region has pushed fintech firms to provide consumers with a personalised and seamless online banking experience. To achieve this, fintech firms have focused on implementing technological innovations that promise faster, cheaper, customer-centric banking services.


14 tips for CIOs managing shadow IT activities

Considering how complex IT has become, particularly in the age of the internet, the ability to know about and effectively manage IT resources -- both internal and external -- has become increasingly important. Here, we examine situations to be aware of regarding shadow IT and offer guidance to ensure that CIOs can identify and mitigate rogue activities. The primary goal for most CIOs is a smooth-running IT organization that is compliant, secure and risk-free. On the issue of security, they pay attention to any situation that threatens the confidentiality, integrity and availability of information. Non-approved installation of systems, whether on site or via cloud technology, presents possible unauthorized access to internal systems. From a risk management perspective, shadow IT presents unique challenges to CIOs and their cybersecurity and operations teams and should be a key element in those activities. The growth of cloud-based systems using software as a service (SaaS), infrastructure as a service (IaaS) and platform as a service (PaaS) represents significant opportunities for shadow IT activities. 


AI adoption – Don’t leave data governance behind

Data is the driver for AI and digital transformation. Yet time and time again, we see instances where it is not leveraged in a way that reflects its value. Of course, it is never as easy as we want – data governance and conditioning take time and resources. However, it must be viewed in terms of the benefits it will bring: observability, reproducibility, efficiency and transparency. With AI now very much a part of business function and only set to increase in reach and take-up, the enterprise must react accordingly. In understanding the main challenges and obstacles to AI adoption, companies can proactively look to tackle them. Moreover, for companies yet to begin their AI journey, prior knowledge of challenges will allow them to prepare and plan. Addressing company culture and practices early on makes a big difference down the line. Many have had to learn the hard way, so businesses should take heed when they can. It is essential that data governance procedures are given the careful consideration they require and that – as much as possible – companies avoid viewing them as an addendum tacked on to digital transformation plans.



Quote for the day:

"Do not follow where the path may lead. Go instead where there is no path and leave a trail." -- Muriel Strode

Daily Tech Digest - June 02, 2020

Big GDPR Fines in UK and Ireland: What's the Holdup?

"Although the impact of COVID-19 may explain some of the current, continued delay, quite why what may end up being over a year to resolve these matters since the ICO announced its intentions to fine may leave some wondering whether GDPR enforcement is going as quickly as it should," he says. "In addition, what was also expected to be a showcase for the first significant fines under GDPR in the U.K. may now be a letdown." But Brian Honan, who heads Dublin-based cybersecurity consultancy BH Consulting, says that seeing an extended legal process isn't surprising, especially because GDPR enforcement norms have yet to be set. "The regulator, be that the ICO or any other regulator, has to ensure their case is as legally watertight as it can be before issuing a fine or a penalty. This is very important as organizations, particularly large ones with deep legal resources, will no doubt challenge any penalties imposed on them," he says. "The BA and Marriott cases are a prime example of this," says Honan, who's also a cybersecurity adviser to Europol, the EU's law enforcement intelligence agency. "We also have to take into account many of the regulators have limited resources, and their staff have to ensure they support the rights of all data subjects as best they can."


How to set up a chaos engineering game day

It isn't easy to run a chaos engineering game day. Nonetheless, it should be both fun and instructive. Manifold has hosted several styles of chaos engineering game days. Examples include 30-minute tabletop events as well as multi-hour active failure events that involve the full engineering team. A recent offsite Manifold event involved dice rolls, character classes and prizes for surviving the chaos incident. To maintain a chaos engineering program, employees must enjoy the challenge. "Uncontrolled chaos will happen to your system -- save your seriousness for that," said James Bowes, CTO of Manifold. Role-playing game days are a great way to keep it interesting. With each chaos engineering game day, the organization should build up its resistance to digital failure. "As you proceed, and if you are successful, it should become more difficult to find parts of the system to break," Bowes said. Let the participants know that the goal is to find problems; if they break something, consider that a success. But keep other teams and stakeholders informed.


It’s Time to Rethink Leadership Around Leading for Resilience

If you lead with the assumption that something somewhere and at some time will jump out and attack, you naturally prepare to defend yourself. This preparation doesn’t distract you from moving forward, but it does prove critical when you need to protect yourself. If your entire supply chain is dependent upon the ongoing support of unfriendly or at least unaligned actors and subject to pendulum swings in the political environment, you diversify the supply chain risk. By the same token, minimizing business model risk by diversifying channels is essential. Moving forward, expect every restaurant and food-service operator that is interested in surviving and thriving to develop robust online and takeout systems and internal processes. I’ve lost interest or empathy for the old-line retailers of my childhood now teetering on the brink of the abyss. They’ve had more than two decades to reset for resilience and diversify their business models, develop new channels, embrace technology, and make themselves relevant to consumers. A few have pulled this off and merit kudos. The rest will likely soon join the growing heap of old brands that will be lost to memory in a few short years.


Work in a COVID-19 world: Back to the office won’t mean back to normal

We’re now able to say, “Okay, what might be the new normal beyond this?” We recognize that there will be re-integration back into our worksites done in the current COVID-19 environment. But beyond COVID, post-vaccines, as we think about our business continuity going forward, I do think that we will be moving into, very purposefully, a more hybrid work arrangement. That means new, innovative, in-office opportunities because we still want people to be working face-to-face and have those in-person sort of collisions, as we call them. Those you can’t do at all or they are harder to do on videoconferencing. But there can be a new balance between in-office and remote work -- and fine-tuning our own practices – that will enable us to be as effective as possible in both environments. So, no doubt, we have already started to undertake that as a post-COVID approach. We are asking what it will look like for us, and then how do we then make sure from a philosophical and a strategy perspective that the right practices are put into place to enable it.


Cloud infrastructure operators should quickly patch VMware Cloud Director flaw

The reason the flaw has not been rated critical is likely because attackers technically need authenticated access to VMware Cloud Director to exploit it. However, according to Citadelo's Zatko, that's not hard to achieve in practice since most cloud providers offer trial accounts to potential customers that involve access to the Cloud Director interface. In most cases there is no real identity verification either for such accounts, so attackers can gain easy access without providing their real identities. This highlights a larger issue with assessing risk based only on vulnerability scores: Severity scores don't always reflect or take into account the real-world conditions in which vulnerable systems might typically exist. Certain configuration or deployment choices can make a vulnerability much easier or harder to exploit than the advisory or the CVSS score suggests. Zatko is concerned that VMware Cloud Director users did not take the issue seriously enough based on the advisory alone. More than two weeks after the patches had already been out, his company tested another Fortune 500 organization that used the product and it was still vulnerable.


OpenAI Announces GPT-3 AI Language Model with 175 Billion Parameters

OpenAI made headlines last year with GPT-2 and their decision not to release the 1.5 billion parameter version of the trained model due to "concerns about malicious applications of the technology." GPT-2 is one of many large-scale NLP models based on the Transformer architecture. These models are pre-trained on large text corpora, such as the contents of Wikipedia, using self-supervised learning. In this scenario, instead of using a dataset containing inputs paired with expected outputs, the model is given a sequence of text with words "masked" and it must learn to predict the masked words based on the surrounding context. After this pre-training, the models are then fine-tuned with a labelled benchmark dataset for a particular NLP task, such as question-answering. However, researchers have found that the pre-trained models perform fairly well even without fine-tuning, especially for large models pre-trained on large datasets. Earlier this year, OpenAI published a paper postulating several "laws of scaling" for Transformer models.
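The masked-word pretraining setup described above can be sketched in a few lines. This toy example is not OpenAI's code (GPT-family models actually use the closely related next-word-prediction objective rather than masking); it simply shows how self-supervised training pairs are built from raw text with no human labels:

```python
import random

def make_masked_example(tokens, mask_rate=0.15, mask_token="[MASK]", seed=1):
    """Build one self-supervised training pair: some tokens in the input
    are replaced by a mask, and the targets are the original tokens at
    those positions. No human-labelled data is needed."""
    rng = random.Random(seed)
    masked_input, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked_input.append(mask_token)
            targets[i] = tok  # the model must predict this from context
        else:
            masked_input.append(tok)
    return masked_input, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
masked_input, targets = make_masked_example(tokens, mask_rate=0.3)
print(masked_input)  # the sentence with some tokens replaced by [MASK]
print(targets)       # position -> original token pairs to be predicted
```

Because the targets come from the text itself, any large corpus becomes training data for free, which is what makes pretraining at the scale of GPT-2 and GPT-3 feasible.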


10 open source cloud security tools to know

PacBot, also known as Policy as Code Bot, is a compliance monitoring platform. You implement your compliance policies as code, and PacBot checks your resources and assets against those policies. You can use PacBot to automatically create compliance reports and resolve compliance violations with predefined fixes. Use the Asset Group feature to organize your resources within the PacBot UI dashboard, based on certain criteria. For example, you can group all your Amazon EC2 instances by state -- such as pending, running or shutting down -- and view them together. You can also limit the scope of a monitoring action to one asset group, for more targeted compliance. PacBot was created by T-Mobile, which continues to maintain it. It can be used with AWS and Azure. ... Pacu is a penetration testing toolkit for AWS environments. It provides a red team a series of attack modules that aim to compromise EC2 instances, test S3 bucket configurations, disrupt monitoring capabilities and more. The toolkit currently has 36 plugin modules and includes built-in attack auditing for documentation and test timeline purposes. Pacu is written in Python and maintained by Rhino Security Labs, a penetration testing provider.
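The "policy as code" idea that PacBot implements can be illustrated with a toy example. This is not PacBot's actual API; the asset fields, policy names and grouping logic here are made up for illustration:

```python
from collections import defaultdict

# Toy assets, analogous to what a scanner might collect from a cloud account.
ASSETS = [
    {"id": "i-01", "type": "ec2", "state": "running", "public": True},
    {"id": "i-02", "type": "ec2", "state": "stopped", "public": False},
    {"id": "s3-logs", "type": "s3", "state": "active", "public": True},
]

def policy_no_public_s3(asset):
    """Policy as code: S3 buckets must not be publicly accessible."""
    return not (asset["type"] == "s3" and asset["public"])

def evaluate(assets, policies):
    """Return (asset id, policy name) pairs for every violation found."""
    return [(a["id"], p.__name__) for a in assets for p in policies if not p(a)]

def group_by_state(assets, asset_type):
    """Mimic an 'asset group' view: bucket one asset type by its state."""
    groups = defaultdict(list)
    for a in assets:
        if a["type"] == asset_type:
            groups[a["state"]].append(a["id"])
    return dict(groups)

print(evaluate(ASSETS, [policy_no_public_s3]))  # [('s3-logs', 'policy_no_public_s3')]
print(group_by_state(ASSETS, "ec2"))            # {'running': ['i-01'], 'stopped': ['i-02']}
```

Expressing each rule as a small testable function is what makes automated compliance reports and predefined fixes possible: a violation is just a function returning False for a given asset.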



NIS security regulations proving effective, but more work to do

The government said it now plans to make some technical changes to the regulatory regime to ensure it remains proportionate and targeted and will be considering a number of amendments to be taken up. These changes are likely to centre on cost recovery, to better enable competent authorities to conduct regulatory activity; the implementation of an improved appeals mechanism; more clarity around the wider enforcement regime; the introduction of support to manage risks to organisational supply chains; the introduction of best-practice sharing; and a number of measures to account for any changes that may be needed, or may become possible, after the end of the Brexit transition period. Kuan Hon, a director in the technical team at law firm Fieldfisher, said that based on the statistics presented in the report, there had clearly been very limited enforcement of the NIS regulations so far, with no fines having been levied, and fewer incidents reported to regulators than DCMS anticipated. However, she added, compliance and incident reporting costs had been much higher than first expected.


Cisco takes aim at supporting SASE

Reed stated that secure access and optimal performance are a must. “The rapid adoption of SD-WAN for connecting to multi-cloud applications provides enterprises with the opportunity to rethink how access and security are managed from campus to cloud to edge,” he stated. “With 60% of organizations expecting the majority of applications to be in the cloud by 2021 and over 50% of the workforce to be operating remotely, new networking and security models such as SASE offer a new way to manage the new normal.” According to Reed, the goal of SASE is to provide secure access to applications and data from on-premises data centers or cloud platforms, with access determined by identities that are defined by combinations of characteristics including individuals, groups, locations, devices, and services. Service edge refers to global points of presence (PoP), IaaS, or colocation facilities where local traffic from branches and endpoints is secured and forwarded to the appropriate destination without first traveling through corporate data centers. By delivering security and networking services together from the cloud, organizations will be able to securely connect any user or device to any application and optimize user experience, Reed stated.
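The identity-driven access decision Reed describes can be sketched as a small policy check. The attribute names and values below are hypothetical, not taken from any Cisco product:

```python
def allow_access(identity, resource_policy):
    """Toy SASE-style decision: grant access only when every attribute
    required by the resource's policy matches the requesting identity."""
    return all(identity.get(attr) in allowed
               for attr, allowed in resource_policy.items())

# The resource requires a combination of group, device posture and location.
policy = {
    "group": {"finance", "audit"},
    "device_posture": {"managed"},
    "location": {"office", "vpn", "home"},
}

alice = {"user": "alice", "group": "finance",
         "device_posture": "managed", "location": "home"}
mallory = {"user": "mallory", "group": "engineering",
           "device_posture": "unmanaged", "location": "cafe"}

print(allow_access(alice, policy))    # True
print(allow_access(mallory, policy))  # False
```

The point of evaluating such combinations at the service edge, rather than inside a corporate data center, is that the decision can be made at the PoP nearest the user before traffic is forwarded anywhere.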


Causes of Memory Leaks in JavaScript and How to Avoid Them

The fastest way to check memory usage is to take a look at the browser task managers (not to be confused with the operating system's task manager). They provide us with an overview of all tabs and processes currently running in the browser. Chrome's Task Manager can be accessed by pressing Shift+Esc on Linux and Windows, while Firefox's can be opened by typing about:performance in the address bar. Among other things, they allow us to see the JavaScript memory footprint of each tab. If our site is just sitting there doing nothing, yet its JavaScript memory usage is gradually increasing, there’s a good chance we have a memory leak going on. Developer tools provide more advanced memory analysis methods. By recording with Chrome's Performance tool, we can visually analyze the performance of a page as it's running. Some patterns are typical of memory leaks, such as steadily increasing heap memory use. Beyond that, both Chrome and Firefox developer tools offer excellent possibilities to further explore memory usage with the help of the Memory tool.
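The detection idea (sample heap usage between bursts of work and look for growth that never plateaus) is not browser-specific. Here is the same technique sketched with Python's tracemalloc standing in for the browser tools described above; the leaky workload is deliberately contrived:

```python
import tracemalloc

leaky_cache = []  # module-level state that only ever grows: a classic leak

def handle_request(payload):
    leaky_cache.append(payload)  # bug: nothing is ever evicted
    return len(payload)

def sample_heap(workload, rounds=3, per_round=1000):
    """Take a heap sample after each burst of work; usage that keeps
    rising with no plateau is the signature of a leak."""
    tracemalloc.start()
    samples = []
    for r in range(rounds):
        for i in range(per_round):
            workload(f"round {r} payload {i}")  # distinct, retained strings
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
    tracemalloc.stop()
    return samples

samples = sample_heap(handle_request)
print(samples)  # three steadily increasing byte counts
print(all(b > a for a, b in zip(samples, samples[1:])))  # True for a leak
```

A healthy workload would show the samples levelling off once caches warm up; monotonic growth under a steady workload is the cue to go digging with the Memory tool.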



Quote for the day:

"Leadership is a process of mutual stimulation which by the interplay of individual differences controls human energy in the pursuit of a common goal." -- Paul Pigors

Daily Tech Digest - June 01, 2020

The Cybersecurity Implications of 5G Technology

Since one of the chief benefits envisioned for 5G is the ability to connect more and more devices to the IoT, this “also increases the threat vectors for hackers,” according to HackerNoon.com. Another potential “worst-case scenario” outlined by HackerNoon: “Faster networks can also mean faster ways for viruses and malware to spread. If more users are on the network, then you also have the potential for more infected devices and systems than ever before.” Commenting on the concern that a greatly expanded IoT multiplies the potential points of entry for cyberattacks in an article titled “5G Dangers: What are the Cybersecurity Implications?” Heimdal Security notes that, “5G technology could also lead to botnet attacks, which will spread at a much higher speed than the current networks allow it.” Of particular relevance to the cybersecurity community, the dawn of the 5G era demands that new and improved defenses and cybersecurity protocols be developed and put in place to counter the potential risks. This means the current and future work of many cybersecurity professionals will be inextricably connected to understanding and defending against the new security risks, both known and unknown, posed by this rapidly emerging technological breakthrough.


Quantum AI is still years from enterprise prime time

For quantum AI to mature into a robust enterprise technology, there will need to be a dominant framework for developing, training, and deploying these applications. Google’s TensorFlow Quantum is an odds-on favorite in that regard. Announced this past March, TensorFlow Quantum is a new software-only stack that extends the widely adopted TensorFlow open source AI library and modeling framework. TensorFlow Quantum brings support for a wide range of quantum computing platforms into one of the dominant modeling frameworks used by today’s AI professionals. Developed by Google’s X R&D unit, it enables data scientists to use Python code to develop quantum ML and DL models through standard Keras functions. It also provides a library of quantum circuit simulators and quantum computing primitives that are compatible with existing TensorFlow APIs. Developers can use TensorFlow Quantum for supervised learning on such AI use cases as quantum classification, quantum control, and quantum approximate optimization. They can execute advanced quantum learning tasks such as meta-learning, Hamiltonian learning, and sampling thermal states.


How managed threat hunting helps bust malicious insiders

Alicia first observed an employee apparently hacking their own laptop in order to obtain local admin credentials. This was done using a technique known as sticky keys, actually an accessibility feature built into Windows that can be launched with a specific key combination from the login screen. “Although the technique is referred to as sticky keys, it is actually referring to exploiting the way certain versions of Windows will execute applications designed for accessibility features,” said Lee. “In vulnerable versions of Windows, when these accessibility features are launched via a set of key combinations (shift five times for sticky keys, press ‘Windows+U’ for Windows Utility Manager, etc.), Windows will simply launch the associated application from a hardcoded path in a privileged state. “The adversary exploiting this feature can simply replace the application binary with one of their choosing. As long as the filepath and filename are the specified ones for the shortcut key combination, Windows will execute it. This technique is fairly well-known as a way to recover Windows passwords and has been used by adversaries in the past.”
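Since the attack described above works by swapping a hardcoded accessibility binary for one of the attacker's choosing, one generic defensive check is to compare the current hashes of those binaries against a trusted baseline. The sketch below is an illustration of that idea only — the file names and the baseline source are assumptions, and this is not any vendor's actual hunting tooling:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def replaced_binaries(system_dir: Path, baseline: dict) -> list:
    """Return names of binaries whose current digest no longer matches
    the trusted baseline, i.e. candidates for the accessibility-binary
    swap (e.g. sethc.exe or utilman.exe replaced with cmd.exe)."""
    return [
        name
        for name, good_digest in baseline.items()
        if (system_dir / name).exists()
        and sha256_of(system_dir / name) != good_digest
    ]
```

A baseline would be captured from a known-good OS image; any mismatch on an accessibility binary is a strong signal worth triaging, since those files rarely change outside of patching.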


What is edge computing? Here's why the edge matters and where it's headed

In a modern communications network designed for use at the edge — for example, a 5G wireless network — there are two possible strategies at work. First, data streams, audio, and video may be received faster, with fewer pauses (preferably none at all), when servers are separated from their users by a minimum of intermediate routing points, or "hops." Content delivery networks (CDNs) from providers such as Akamai, Cloudflare, and NTT Communications are built around this strategy. Second, applications may be expedited when their processors are stationed closer to where the data is collected. This is especially true for logistics and large-scale manufacturing applications, as well as for the Internet of Things (IoT), where sensors and data-collecting devices are numerous and highly distributed. Depending on the application, when either or both edge strategies are employed, these servers may actually end up on one end of the network or the other. Because the Internet isn't built like the old telephone network, "closer" in terms of routing expediency is not necessarily closer in geographical distance. 
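The hop-count argument can be made concrete with a toy latency model; all numbers below are hypothetical and chosen only to show how quickly per-hop delay dominates:

```python
def end_to_end_latency_ms(hops: int, per_hop_ms: float, compute_ms: float) -> float:
    """Toy model: round-trip latency grows with every routing hop
    between the user and the server doing the work."""
    return 2 * hops * per_hop_ms + compute_ms

# Hypothetical comparison: a regional edge node 3 hops away versus a
# central data center 12 hops away, with the same 5 ms of server work.
edge = end_to_end_latency_ms(hops=3, per_hop_ms=2.0, compute_ms=5.0)
core = end_to_end_latency_ms(hops=12, per_hop_ms=2.0, compute_ms=5.0)
print(edge, core)  # 17.0 53.0
```

Even in this crude model, moving the server from 12 hops to 3 cuts round-trip time by more than two-thirds, which is the whole premise of edge placement.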


Public speaking for technical pros: How to deliver a great in-person or virtual presentation

There's standing up at stand-up, there's doing an all-hands demo, then there's doing a small meetup, a small conference, a multi-speaker talk at a multi-track conference, a talk at a single-track conference. There's this whole escalation, and a lot of the levels above meetup are not a different skillset, but a skillset that you would need to focus on and work on. You have to learn to write a CFP, you have to learn to put together a slide deck; there's a whole bunch of stuff around that. And so that's sort of a separate question, but I think to start out, the thing you need to understand is that everybody in the audience is on your side. A lot of people give this really old speaking advice to imagine your audience naked so that you don't respect them anymore, and I think that's terrible on several levels. Please don't imagine anybody naked. What I want you to do is imagine that they are sitting in this meeting because they want to hear from you. They want you to succeed, and if you have a problem, they are empathizing with the problem.


10 Coding Principles Every Programmer Should Learn

There are two general ways to reuse code you have already written: inheritance and composition. Both have their own advantages and disadvantages, but, in general, you should favor composition over inheritance where possible. Composition allows changing the behavior of a class at run-time by setting a property, and by using interfaces to compose a class you get polymorphism, which provides the flexibility to replace a component with a better implementation at any time. Even Joshua Bloch’s Effective Java advises favoring composition over inheritance. If you are still not convinced, you can also read here to learn more about why composition is better than inheritance for reusing code and functionality. And, if you keep forgetting this rule, here is an excellent cartoon to pin up at your desk :-) If you are interested in learning more about object-oriented programming concepts like composition, inheritance, association, and aggregation, you can also take a look at the Object-Oriented Programming in Java course on Coursera.
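The run-time behavior change described above is easy to sketch: instead of inheriting formatting behavior, the class below holds a formatter object behind an interface (a `Protocol` here), so the behavior can be swapped by assigning a property. The class and method names are illustrative:

```python
from typing import Protocol

class Formatter(Protocol):
    """The interface the composed class programs against."""
    def format(self, record: str) -> str: ...

class PlainFormatter:
    def format(self, record: str) -> str:
        return record

class UpperFormatter:
    def format(self, record: str) -> str:
        return record.upper()

class Logger:
    """Logger *has a* Formatter rather than *is a* Formatter, so its
    behavior can change at run-time without defining a new subclass."""
    def __init__(self, formatter: Formatter) -> None:
        self.formatter = formatter

    def log(self, record: str) -> str:
        return self.formatter.format(record)

logger = Logger(PlainFormatter())
print(logger.log("deploy ok"))       # deploy ok
logger.formatter = UpperFormatter()  # behavior swapped via a property
print(logger.log("deploy ok"))       # DEPLOY OK
```

With inheritance, switching from plain to upper-case output would require constructing an object of a different subclass; with composition it is a single assignment.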



Extensible Effects in JavaScript for Fun and Profit

Extensible Effects, broadly speaking, is the idea that you can separate the 'what' and the 'how' in your code. By representing effects as 'tokens' that hold no intrinsic implementation details, you can write programs that are completely unaware of how they'll eventually interact with their environment. Later on, these effects can be 'interpreted' by converting each token into specific actions of your choice. These effects could be general, such as 'send network request', or domain-specific, like 'log user out' - it's up to you. For those unfamiliar with monads, you can think of this technique as dependency injection for your software's API calls. You program to an interface, and can provide a different implementation depending on the situation. Extensible effects are implemented via a Freer monad. This is a nested data structure holding an initial effect or value and a sequence of functions, each converting the result of the previous effect into the next. When applied to an interpreter function that converts effects into the target monad of your choice, it unwraps from the 'inside out': the first effect is converted into the target monad, which is mapped into the next effect-containing Freer monad.
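Although the article's context is JavaScript, the token-plus-interpreter idea can be sketched in Python, with a generator standing in for the Freer monad's chain of continuations. The effect names and handlers below are illustrative assumptions, not part of any particular library:

```python
from dataclasses import dataclass

# Effect 'tokens': plain data with no implementation details.
@dataclass
class Log:
    message: str

@dataclass
class FetchUser:
    user_id: int

def program():
    """The 'what': yields effect tokens and receives each result back,
    without ever touching the network or the console itself."""
    user = yield FetchUser(42)
    yield Log(f"loaded {user}")
    return user

def run(gen, handlers):
    """The 'how': an interpreter maps each token type to an
    implementation and feeds the result back into the program."""
    try:
        effect = next(gen)
        while True:
            result = handlers[type(effect)](effect)
            effect = gen.send(result)
    except StopIteration as done:
        return done.value

# A test interpreter: fake implementations instead of real I/O.
log_lines = []
handlers = {
    FetchUser: lambda e: {"id": e.user_id, "name": "ada"},
    Log: lambda e: log_lines.append(e.message),
}
result = run(program(), handlers)
print(result, log_lines)
```

Swapping `handlers` for a dictionary of real HTTP and logging calls changes the 'how' without touching `program` at all, which is exactly the dependency-injection reading offered above.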


Microservices: A cheat sheet

Comparisons are frequently made between microservices and service-oriented architecture (SOA). While the two may seem similar at first glance, they're alike only in the most basic ways. Both SOA and microservices involve the creation of small components that communicate data to other services, but the scope, the purpose, and how the communication occurs are completely different. For starters, SOA is an enterprise-wide architecture, whereas microservice architecture is a way to build a single application. The idea behind SOA is to create a common framework for communication that allows applications, data sources, and other network-connected elements to communicate in a platform-agnostic manner. SOA wants communication between elements to happen quickly, smoothly, and without barriers; this is a radical difference from microservices, which want independent elements that aren't dependent on each other at all. SOA integrations are reused constantly — that's the goal of SOA, according to IBM. In the case of microservices, reuse is undesirable: if a component is being called in more than one place by its main application, agility and resilience will suffer.


The Four Data Management Mistakes Derailing Your BI Program

There are a number of ways this can happen to a company. When folks come to us looking for a reporting solution to meet their customers’ needs (such as a BI solution designed to be embedded into SaaS applications), they’re not setting up the database in the same step. They’ve already been collecting data for a long time — long before reporting was even a consideration, in most cases. Sometimes we discover that the person who initially set up the database doesn’t even work at the organization anymore and didn’t leave much in the way of documentation or tribal knowledge to help onboard a successor. Other times, responsibility for (and knowledge of) the data is distributed throughout the company. One group might have a deep understanding of the data’s semantics while another, such as IT, might have some insight into its maintenance and traffic capacity. A third group responsible for data analysis might be most familiar with its utility to stakeholders. Unfortunately, none of these groups have a grasp of the database’s structure or complete knowledge of the data itself.


DataOps: The Path to AI-Readiness

Every business has a unique vision or goal for AI, whether it’s improving predictions, automating mundane tasks, freeing up employees to do more fulfilling work, or optimizing processes. But in many cases, there’s no better purpose for AI than understanding your environment, hearing what your systems are saying through their data, and discovering issues before they snowball into full-blown outages. Organizations lose about $26.5 billion in revenue each year because of IT system outages. IBM’s Watson AIOps understands the systems, normal system behaviors, and acceptable ranges, and provides alerts when a problem arises. In effect, it’s a nervous system that allows CIOs to effectively manage all of their systems. Given that data scientists lament limited data access and the lack of a line of sight between data and all team members, a solution such as this becomes a facilitator for faster, proactive responsiveness. ... AI-enabled automation is integral to DataOps for more than just manual steps; it also covers governance processes, data curation, metadata assignment, and ensuring data is available for self-service. This helps operationalize consistently high-quality data throughout the entire enterprise.



Quote for the day:

"The secret of a leader lies in the tests he has faced over the whole course of his life and the habit of action he develops in meeting those tests." -- Gail Sheehy