Daily Tech Digest - September 30, 2021

How IBM lost the cloud

At first, Genesis still lacked support for the key virtual private cloud technology that both engineers and salespeople had identified as important to most prospective cloud buyers. This caused a split inside IBM Cloud: A group headed by the former Verizon executives continued to work on the Genesis project, while another group, persuaded by a team from IBM Research that concluded Genesis would never work, began designing a separate infrastructure architecture called GC that would achieve the scaling goals and include the virtual private cloud technology using the original SoftLayer infrastructure design. Genesis would never ship. It was scrapped in 2017, and that team began work on its own new architecture project, internally called NG, that ran in parallel to the GC effort. For almost two years, two teams inside IBM Cloud worked on two completely different cloud infrastructure designs, which led to turf fights, resource constraints and internal confusion over the direction of the division.

How Can Artificial Intelligence Transform Software Testing?

The rise of automated testing has coincided with the adoption of agile methodologies in software development, allowing QA specialists to deliver robust, error-free software in small batches. Manual testing is now largely confined to business acceptance testing. DevOps testing, together with automation, helps agile teams ship a dependable product for SaaS/cloud deployment through a Continuous Integration/Continuous Delivery pipeline. In software testing, Artificial Intelligence is a blend of machine learning, cognitive automation, reasoning, analytics, and natural language processing. Cognitive automation leverages several technological approaches, such as data mining, semantic technology, text analytics, machine learning, and natural language processing. Robotic Process Automation (RPA), for instance, is one such connecting link between Artificial Intelligence and Cognitive Computing.

GriftHorse Money-Stealing Trojan Takes 10M Android Users for a Ride

The creators of the apps have employed several novel techniques to help the apps stay off the radar of security vendors, the analysis found. In addition to the no-reuse policy for URLs mentioned above, the cybercriminals are also developing the apps using Apache Cordova. Cordova allows developers to use standard web technologies – HTML5, CSS3 and JavaScript – for cross-platform mobile development – which in turn allows them to push out updates to apps without requiring user interaction. “[This] technology can be abused to host the malicious code on the server and develop an application that executes this code in real-time,” according to Zimperium. “The application displays as a web page that references HTML, CSS, JavaScript and images.” The campaign is also supported with a sophisticated architecture and plenty of encryption, which makes detection more difficult, according to the researchers. For instance, when an app is launched, the encrypted files stored in the “assets/www” folder are decrypted using AES.

How to Decide in Self-Managed Projects - a Lean Approach to Governance

If the people in the project can make decisions themselves, we can call it self-managed. By “self-managed” (or self-organized), I mean that the project members can make decisions about the content of the work, and also who does what and by when. Self-managed groups have the advantage that those who do the work are closer to the decisions: decisions are better grounded in operations, and there is more buy-in and deeper insight from those who are going to carry out the tasks into how the tasks fit into the bigger picture. One step further would be to have a self-governed project. ... The trick is to use lean governance, intentionally and in our favor. The goal of governance in a new project is to provide just enough structure to operate well. Just enough team structure to have a clear division of labor. Just enough meeting structure to use our time well. Not more but also not less. That level of “just enough,” of course, depends on the phase of the project.

Citi’s big idea: central, commercial banks use shared DLT for “digital money format war”

The concept involves creating a blockchain with tiers and partitions, on which central banks perform the same current role dealing with commercial banks. On the same ledger, commercial banks and emoney providers perform similar activities as they do now with their clients. Given this is how things work today and most legislation is technology agnostic, it likely wouldn’t require legislative changes and may dispense with the need for CBDCs. In McLaughlin’s view, the debates around central bank digital currency (CBDC) frame the conversation as public versus private money. An alternative perspective is to look at regulated versus unregulated money. The concept also addresses bank coins or settlement tokens. “If we as commercial banks think that the right thing to do is for each of us to create our own coins, again, the regulated sector will be fragmented. And that will not help in the contest between regulated money and non-regulated money,” said McLaughlin. Central bank money, commercial bank money and emoney are all regulated and represent specific legal liabilities, no matter their technical form.

Major Quantum Computing Strategy Suffers Serious Setbacks

The key to quantum computing is that, during the computation, you must avoid revealing what information your qubits encode: If you look at a bit and say that it holds a 1 or a 0, it becomes merely a classical bit. So you must shield your qubits from anything that could inadvertently reveal their value. (More strictly, decide their value — for in quantum mechanics this only happens when the value is measured.) You need to stop such information from leaking out into the environment. That leakage corresponds to a process called quantum decoherence. The aim is to carry out quantum computing before decoherence can take place, since it will corrupt the qubits with random errors that will destroy the computation. Current quantum computers typically suppress decoherence by isolating the qubits from their environment as well as possible. The trouble is, as the number of qubits multiplies, this isolation becomes extremely hard to maintain: Decoherence is bound to happen, and errors creep in.

Apple Pay-Visa Vulnerability May Enable Payment Fraud

The vulnerabilities were detected in iPhone wallets where Visa cards were set up in "express transit mode," the researchers say. The transit mode feature, launched in May 2019, enables commuters to make contactless mobile payments without fingerprint authentication. Threat actors can use the vulnerability to bypass the Apple Pay lock screen and illicitly make payments using a Visa card from a locked iPhone to any contactless Europay, Mastercard and Visa - or EMV - reader, for any amount, without user authorization, the researchers say. Information Security Media Group could not immediately ascertain the number of users affected by this vulnerability. "The weakness lies in the Apple Pay and Visa systems working together and does not affect other combinations, such as Mastercard in iPhones, or Visa on Samsung Pay," the researchers note. The researchers, who come from the University of Birmingham’s School of Computer Science and the University of Surrey’s Department of Computer Science, found the flaw as part of a project dubbed TimeTrust.

How much trust should we place in the security of biometric data?

Whilst the collection of fingerprint data is very convenient for the border control forces, how convenient is it for the asylum seekers themselves? Could they be opening themselves up to greater risks by providing their data? A potential issue here is the amount of trust that people place in fingerprints. People assume that fingerprints are an infallible method of identification. Whilst the chance of two people having matching fingerprints is infinitesimally small, automated matching systems often do not make use of the entire fingerprint. Different levels of detail can be used in matching, with differing levels of reliability. When asked to provide our fingerprints for identification purposes, how often do we consider how the matching is performed? Whilst standards exist for the robustness of fingerprint matching when used within the Criminal Justice System, can we assume that the same standards apply to border control systems? Generally, the fewer comparison points to be analyzed, the faster the matching system; in a border control situation where a large number of people are being processed, it is important to understand how much of a trade-off between speed and accuracy has occurred.

The New Security Basics: 10 Most Common Defensive Actions

The current assessments found that the growing number of public incidents of ransomware attacks and attacks on the software supply chain, such as the compromise of remote management software maker Kaseya, have companies more focused on activities designed to prevent or mitigate incidents. Over the past two years, 61% more companies have actively sought to identify open source — 74 this year versus 46 two years ago — while 55 companies have begun to mandate boilerplate software license agreements, an increase of 57% compared with two years ago. "Over the last 18 months, organizations experienced a massive acceleration of digital transformation initiatives," said Mike Ware, information security principal at Navy Federal Credit Union, a member organization of the BSIMM community, in a statement. "Given the complexity and pace of these changes, it's never been more important for security teams to have the tools which allow them to understand where they stand and have a reference for where they should pivot next."

Cycle Time Breakdown: Reducing Pull Request Pickup Time

There’s nothing worse than working hard for a day or two on a difficult piece of code, creating a pull request for it, and having no one pay attention or even notice. It’s especially frustrating if you specifically assign the Pull Request to a teammate. It’s a bother to have to remember to send emails or Slack messages to fellow team members to get them to do a review. No one wants to be a distraction, but the work has to be done, right? So naturally, the conscientious Dev Manager will want to pay close attention to Pull Request Pickup Time (PR Pickup Time), the second segment of a project’s journey along the Cycle Time path. (The first segment, Coding Time, was covered in an earlier blog post.) She’ll want to make sure those frustrations described above don’t occur. Keeping Cycle Time “all green” is the goal, but this is often difficult because there are a lot of moving parts that go into managing Cycle Time, including PR Pickup Time.
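PR Pickup Time itself is straightforward to compute once you have the timestamps: it is the gap between a pull request being opened and its first review activity. A minimal sketch (the function name and timestamp format are illustrative, not tied to any particular Git platform's API):

```python
from datetime import datetime

def pr_pickup_time_hours(opened_at: str, first_review_at: str) -> float:
    """Hours between PR creation and the first review activity."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    opened = datetime.strptime(opened_at, fmt)
    reviewed = datetime.strptime(first_review_at, fmt)
    return (reviewed - opened).total_seconds() / 3600

# A PR opened Monday morning, first picked up after lunch the next day:
print(pr_pickup_time_hours("2021-09-27T09:00:00", "2021-09-28T13:30:00"))  # 28.5
```

Aggregating this per team or per repository is what surfaces the "nobody noticed my PR" problem before it becomes a habit.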

Quote for the day:

"Leaders must see the dream in their mind before they will accomplish the dream with their team." -- Orrin Woodward

Daily Tech Digest - September 29, 2021

Approaching Anomaly Detection in Transactional Data

Usually, people mean financial transactions when they talk about transactional data. However, according to Wikipedia, “Transactional Data is data describing an event (the change as a result of a transaction) and is usually described with verbs. Transaction data always has a time dimension, a numerical value and refers to one or more objects”. In this article, we will use data on requests made to a server (internet traffic data) as an example, but the considered approaches can be applied to most of the datasets falling under the aforementioned definition of transactional data. Anomaly detection, in simple words, is finding data points that shouldn’t normally occur in the system that generated the data. Anomaly detection in transactional data has many applications; here are a few examples: fraud detection in financial transactions; fault detection in manufacturing; attack or malfunction detection in a computer network (the case covered in this article); recommendation of predictive maintenance; and health condition monitoring and alerting.
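As a minimal illustration of anomaly detection on internet traffic data (a generic z-score check, not the specific approach from the article, and with made-up traffic numbers), a request count far from the mean can be flagged as anomalous:

```python
from statistics import mean, stdev

def zscore_anomalies(counts, threshold=2.5):
    """Flag indices whose count lies more than `threshold` standard
    deviations from the mean. A single large spike inflates the standard
    deviation, so a modest threshold is used rather than a 3-sigma cut."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Requests per minute to a server; minute 5 is a suspicious burst.
traffic = [120, 118, 125, 122, 119, 990, 121, 117, 123, 120]
print(zscore_anomalies(traffic))  # [5]
```

Real traffic usually needs seasonality-aware or robust methods (median/MAD, isolation forests), but the principle of scoring each event against a model of "normal" is the same.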

Apache Kafka: Core Concepts and Use Cases

The first thing that everyone who works with streaming applications ought to understand is the concept of an event, which is a small piece of data. For instance, when a user registers within the system, an event is created. You can likewise think of an event as a message with data, which can be processed and saved at a certain place if required. This event is the message in which data such as the user’s name, email, password, and so forth can be carried. This highlights that Kafka is a platform that works well for streaming events. Events are continually written by producers; they are called producers because they write events, or data, to Kafka. There are numerous sorts of producers: examples include web servers, parts of applications, whole applications, IoT devices, monitoring agents, and so on. A new user-registration event can be produced by the component of the site that is responsible for user registrations.
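To make the producer/event relationship concrete, here is a toy in-memory sketch. This is not the real Kafka client API, just an illustration of the core idea that producers append events to named, append-only logs from which they can later be read:

```python
from collections import defaultdict

class ToyBroker:
    """In-memory stand-in for a broker: each topic is an append-only event log."""
    def __init__(self):
        self.topics = defaultdict(list)

    def produce(self, topic, event):
        self.topics[topic].append(event)

    def consume(self, topic):
        # Real consumers track per-partition offsets; here we read the whole log.
        return list(self.topics[topic])

broker = ToyBroker()
# The registration component acts as a producer of user-registration events:
broker.produce("user-registrations", {"name": "Ada", "email": "ada@example.com"})
print(broker.consume("user-registrations"))
```

A real Kafka producer would serialize the event and send it to a broker over the network, but the mental model of "write an event, read it back later, in order" carries over directly.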

How to Build a Regression Testing Strategy for Agile Teams

Regression testing is the process of verifying that code changes, updates, or improvements to an application have not affected the software’s existing functionality. Regression testing in software engineering ensures the overall stability and functionality of the software’s existing features, so that the system stays sustainable under continuous improvement as new features are added to the code. Regression testing helps target and reduce the risk of code dependencies, defects, and malfunctions, so the previously developed and tested code stays operational after the modification. Generally, the software undergoes many tests before new changes are integrated into the main development branch of the code. ... Automated regression testing is mainly used with medium and large complex projects when the project is stable. With a thorough plan, automated regression testing reduces the time and effort a tester spends on tedious, repeatable tasks, freeing them to contribute to work that requires manual attention, such as exploratory tests and UX testing.
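A tiny illustration of the idea: pin the existing, already-shipped behavior in tests, so that any later "improvement" that changes it fails before the change reaches the main branch. The discount function here is a made-up example, not from the article:

```python
def apply_discount(price, percent):
    """Existing, already-shipped behavior that new changes must not break."""
    return round(price * (1 - percent / 100), 2)

# Pinned expectations: if a later change alters these results,
# the regression suite fails before the code reaches the main branch.
def test_standard_discount():
    assert apply_discount(200.0, 10) == 180.0

def test_no_discount():
    assert apply_discount(99.99, 0) == 99.99

test_standard_discount()
test_no_discount()
print("regression suite passed")
```

In a real agile setup these checks would live in a test runner (pytest, JUnit, etc.) and run automatically in the CI pipeline on every merge request.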

Sam Newman on Information Hiding, Ubiquitous Language, UI Decomposition and Building Microservices

The ubiquitous language in many ways is the key stone of domain-driven design and it's amazing how many people skip it, and it's foundational. I think a lot of the reason that people skip ubiquitous language is because to understand what terms and terminology are used by the business side of your organization by the use of your software, it involves having to talk to people. It still stuns me how many enterprise architects have come up with a domain model by themselves without ever having spoken to anybody outside of IT. So this fundamentally, the ubiquitous language starts with having conversations. This is why I like event storming as a domain-driven design technique because it places primacy on having that kind of collective brainstorming activity where you get sort of maybe your non-developer, your non-technical stakeholders in the room and listen to what they're talking about and you're picking up their terms, their terminology, and you're trying to put those terms into your code.

Technical architecture: What IT does for a living

Technical architecture is the sum and substance of what IT deploys to support the enterprise. As such, its management is a key IT practice. We talked about how to go about it in a previous article in this series. Which leads to the question, What constitutes good technical architecture? Or more foundationally, What constitutes technical architecture, whether good, bad, or indifferent? In case you’re a purist, we’re talking about technical architecture, not enterprise architecture. The latter includes the business architecture as well as the technical architecture. Not that it’s possible to evaluate the technical architecture without understanding how well it supports the business architecture. It’s just that managing the health of the business architecture is Someone Else’s Problem. IT always has a technical architecture. In some organizations it’s deliberate, the result of processes and practices that matter most to CIOs. But far too often, technical architecture is accidental — a pile of stuff that’s accumulated over time without any overall plan.

Preparing for the 'golden age' of artificial intelligence and machine learning

"Implementing an AI solution is not easy, and there are many examples of where AI has gone wrong in production," says Tripti Sethi, senior director at Avanade. "The companies we have seen benefit from AI the most understand that AI is not a plug-and-play tool, but rather a capability that needs to be fostered and matured. These companies are asking 'what business value can I drive with data?' rather than 'what can my data do?'" Skills availability is one of the leading issues that enterprises face in building and maintaining AI-driven systems. Close to two-thirds of surveyed enterprises, 62%, indicated that they couldn't find talent on par with the skills requirements needed in efforts to move to AI. More than half, 54%, say that it's been difficult to deploy AI within their existing organizational cultures, and 46% point to difficulties in finding funding for the programs they want to implement. ... In recent months and years, AI bias has been in the headlines, suggesting that AI algorithms reinforce racism and sexism. 

Skilling in the IT sector for a post pandemic era – An Experts View

“When there’s a necessity, innovations follow,” said Mahipal Nair (People Development & Operations Leader, NielsenIQ). The company moved from people-interaction-dependent learning to digital methods to navigate skilling priorities. As consumer expectations change, leadership and social skills have become a priority for workplace performance. “The way to solve this is not just to transform current talent, but create relevant talent,” said Nilanjan Kar (CRO, Harappa). Echoing the sentiment, Kirti Seth (CEO, SSC NASSCOM) added that “learning should be about principles, and it should enable employees to make the basics their own.” This will help create a learning organization that can contextualize change across the industry to stay relevant and map the desired learning outcomes. While companies upskill their workforce on these priorities, the real question is what skills will be required? Anupal Banerjee (CHRO, Tata Technologies) noted that “with the change in skills, there are multiple levels to focus on. While one focus area is on technical skills, the second is on behavioral skills. ...”.

Re-evaluating Kafka: issues and alternatives for real-time

By nature, your Kafka deployment is pretty much guaranteed to be a large-scale project. Imagine operating an equally large-scale MySQL database that is used by multiple critical applications. You’d almost certainly need to hire a database administrator (or a whole team of them) to manage it. Kafka is no different. It’s a big, complex system that tends to be shared among multiple client applications. Of course it’s not easy to operate! Kafka administrators must answer hard design questions from the get-go. This includes defining how messages are stored in partitioned topics, retention, and team or application quotas. We won’t get into detail here, but you can think of this task as designing a database schema, but with the added dimension of time, which multiplies the complexity. You need to consider what each message represents, how to ensure it will be consumed in the proper order, where and how to enact stateful transformations, and much more — all with extreme precision.
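One of those design questions, ensuring messages are consumed in the proper order, is commonly answered by keying messages so that all events for one entity land in the same partition. A sketch of the idea follows; note that Kafka's default partitioner actually hashes keys with murmur2, and crc32 here is only an illustrative stand-in with the same key-to-partition property:

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition. Because the mapping is
    deterministic, every event for the same entity (say, one account)
    lands in the same partition and is therefore consumed in order."""
    return zlib.crc32(key) % num_partitions

p = partition_for(b"account-42", 6)
# Every event keyed by the same account hits the same partition:
assert all(partition_for(b"account-42", 6) == p for _ in range(1000))
print(p)
```

This is also why the number of partitions is such a hard up-front decision: changing it later changes the key-to-partition mapping and breaks ordering assumptions.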

Climbing to new heights with the aid of real-time data analytics

Enter hybrid analytics. The world of data management has been reimagined with analytics at the speed of transactions made possible, through simpler processes, and a single hybrid system breaking down the walls between transactions and analytics. It’s possible through hybrid analytics to avoid the movement of information from databases to data warehouses and allow simple real-time data processing. This innovation enables enhanced customer experiences and a more data-driven approach to decision making thanks to the deeper business insights delivered through a hybrid system. Thanks to hybrid analytics, real-time processing allows a faster time to insight. It’s also possible for businesses to better understand their customers without long, complex processes, while the feedback loop is also made shorter for increased efficiency. It’s this approach that delivers a data-driven competitive advantage for businesses. Both developers and database administrators can access and manage data far more easily, only having to deal with one connected system with no database sprawl.

Why DevSecOps fails: 4 signs of trouble

When Haff says that some organizations make the mistake of not giving DevSecOps its due, he adds that the people and culture component is most often the glaring omission. Of course, it’s not actually “glaring” until you realize that your DevSecOps initiative has fallen flat and you start to wonder why. One way you might end up traveling this suboptimal path: You focus too much on technology as the end-all solution rather than a layer in a multi-faceted strategy. “They probably have adopted at least some of the scanning and other tooling they need to mitigate various types of threats. They’re likely implementing workflows that incorporate automation and interactive development,” Haff says. “What they’re likely paying less attention to – and may be treating as an afterthought – is people and culture.” Just as DevOps was about more than a toolchain, DevSecOps is about more than throwing security technologies at various risks. “An organization can get all the tools and mechanics right but if, for example, developers and operations teams don’t collaborate with your security experts, you’re not really doing DevSecOps,” Haff says.

Quote for the day:

"Authentic leaders are often accused of being 'controlling' by those who idly sit by and do nothing" --John Paul Warren

Daily Tech Digest - September 28, 2021

How and why automation can improve network-device security

Automating the processes of device discovery and configuration validation allows you to enforce good network security by making sure that your devices and configurations are not accidentally leaving any security holes open. Stated differently, the goal of automation is to guarantee that your network policies are consistently applied across the entire network. A router that’s forgotten and left unsecured could be the avenue that bad actors exploit. Once each device on the network is discovered, the automation system downloads its configurations and checks them against the configuration rules that implement your network policies. These policies range from simple things that are not security related, like device naming standards, to essential security policies like authentication controls and access control lists. The automation system helps deploy and maintain the configurations that reflect your policies. ... A network-change and configuration-management (NCCM) system can use your network inventory to automate the backup of network-device configurations to a central repository.
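The check step can be sketched very simply: express each policy as a required configuration line and report the rules a device violates. The rule strings and the device configuration below are invented for illustration, not taken from any real policy set:

```python
# Hypothetical policy rules: (description, required configuration substring).
POLICY_RULES = [
    ("TACACS+ authentication enabled", "aaa authentication login default group tacacs+"),
    ("management ACL applied", "access-class MGMT-ACL in"),
]

def audit_config(device_name, config_text):
    """Return descriptions of the policy rules this device's config violates."""
    return [desc for desc, required in POLICY_RULES if required not in config_text]

# A forgotten router with authentication configured but no management ACL:
config = """
hostname edge-rtr-07
aaa authentication login default group tacacs+ local
"""
print(audit_config("edge-rtr-07", config))
```

Production NCCM tools use structured parsing rather than substring matching, but the loop of "download config, evaluate against policy, report drift" is exactly this.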

How Unnecessary Complexity Gave the Service Mesh a Bad Name

The difficulty comes from avoiding “retry storms” or a “retry DDoS,” which is when a system in a degraded state triggers retries, increasing load and further decreasing performance as retries increase. A naive implementation won’t take this scenario into account as it may require integrating with a cache or other communication system to know if a retry is worth performing. A service mesh can do this by providing a bound on the total number of retries allowed throughout the system. The mesh can also report on these retries as they occur, potentially alerting you of system degradation before your users even notice. ... The design pattern of sidecar proxies is another exciting and powerful feature, even if it is sometimes oversold and over-engineered to do things users and tech aren’t quite ready for. While the community waits to see which service mesh “wins,” a reflection of the over-hyped orchestration wars before it, we will inevitably see more purpose-built meshes in the future and, likely, more end-users building their own control planes and proxies to satisfy their use cases.
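One common way a mesh bounds total retries is a retry budget: retries are permitted only up to some fraction of recent request volume, so a degraded backend is failed fast instead of buried under a retry storm. A toy sketch of the mechanism follows; the 10% figure is an illustrative default, not any particular mesh's setting:

```python
class RetryBudget:
    """Bound retries to a percentage of observed request volume."""
    def __init__(self, percent=10):
        self.percent = percent   # retries allowed per 100 requests
        self.requests = 0
        self.retries = 0

    def record_request(self):
        self.requests += 1

    def can_retry(self):
        # Integer math avoids floating-point edge cases at the boundary.
        if self.retries * 100 < self.percent * self.requests:
            self.retries += 1
            return True
        return False  # budget exhausted: give up rather than pile on load

budget = RetryBudget(percent=10)
for _ in range(100):
    budget.record_request()
# However many callers ask, only 10% of the request volume may be retried:
print(sum(budget.can_retry() for _ in range(50)))  # 10
```

Real implementations decay these counters over a sliding time window; the key property is the same: system-wide retries can never grow faster than a fixed fraction of traffic.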

How To Deal With Data Imbalance In Classification Problems?

A classification model is a technique that tries to draw conclusions or predict outcomes based on input values given for training. The input, for example, can be historical bank or other financial-sector data. The model will predict the class labels/categories for new data, for instance whether a customer will be valuable or not, based on demographic data such as gender, income, age, etc. Target class imbalance occurs when the classes or categories in the target column are not balanced. Rao, giving the example of a marketing campaign, said: let’s say we have a classification task on hand to predict whether a customer will respond positively to a campaign or not. Here, the target column, responded, has two classes, yes and no. In this case, the majority of people responded ‘no.’ That is, in a marketing campaign where you end up reaching out to a lot of customers, only a handful of them want to subscribe, for example when you are offering a credit card, a new insurance policy, etc. The ones who subscribed or were interested would request more details.
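One simple way to deal with such an imbalance before training (a generic technique, not something prescribed in the article) is to randomly oversample the minority class until the classes are balanced:

```python
import random
from collections import Counter

def oversample_minority(rows, label_key="responded", seed=0):
    """Naive random oversampling: duplicate minority-class rows until
    every class matches the majority count. Class weights or synthetic
    SMOTE-style samples are common alternatives."""
    rng = random.Random(seed)
    by_class = {}
    for row in rows:
        by_class.setdefault(row[label_key], []).append(row)
    target = max(len(members) for members in by_class.values())
    balanced = []
    for members in by_class.values():
        balanced.extend(members)
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Nine "no" responses for every "yes", as in the campaign example:
data = [{"responded": "no"}] * 9 + [{"responded": "yes"}]
print(Counter(r["responded"] for r in oversample_minority(data)))  # both classes at 9
```

Oversampling must be applied only to the training split, never before the train/test split, or the evaluation will leak duplicated rows into the test set.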

Motivational debt — it will fix itself, right?

Motivational debt is a hidden cost to product delivery. It’s the rust that is accruing on aged PBIs, the sludge at the bottom of the Sprint Backlog and the creaking of the process when needing to do something new. Technical debt is to quality what motivational debt is to process. It’s important to remember that whilst motivational debt is shouldered by the entire Scrum Team, there is an individual element of accrual to it as well. Everything from short-term stresses which bounce back quickly (“I didn’t get any sleep last night”) to long-term tensions which don’t (“My parents are ill”) contributes to the motivational complexities of a Scrum Team. Moving to address these actively is an ethical quandary, as individuals have different coping mechanisms, meaning efforts to help may actually exacerbate the issue. Remember that whilst some team members may be feeling down, others may be up, therefore being conscious of the overall direction of pull is vital as a Scrum Master. Holistically, it is fair to say that motivational debt is felt both individually and collectively and it is everyone’s responsibility to create an environment where it can be minimised. But how can you do this?

Waste and inefficiency in outdated government IT systems

Those responsible for addressing the government’s current levels of wasted IT expenditure may find that businesses offer positive, proactive case studies that highlight the value of embracing digital transformation. A 2020 study from Deloitte, for instance, has found that digitally mature companies – those that have embraced various aspects of digital transformation – saw net revenue growth of 45% and net profit growths of 43% compared to industry averages. The same study has found that the benefits of digital maturation are not limited to profits, but to a range of outcomes including increased efficiency, better product and service quality, and higher levels of both customer satisfaction and employee engagement. A study from McKinsey is even more strident, noting that “by digitising information-intensive processes, costs can be cut by up to 90% and turnaround times improved by several orders of magnitude.” Part of the ‘Organising for Digital Delivery Report’ includes a commitment to “investing in developing the technical fluency of senior civil service leadership.” 

Robotic process automation and intelligent automation are accelerating, study finds

Process mining is used to obtain a wide lens over business processes and workflows within a company by examining event logs across systems, including how variable they are and where there are bottlenecks. The less variable the process, the greater its potential candidacy for RPA/IA, though other factors must be considered as well. Task mining is used to understand how a user is interacting with systems and where there are opportunities for automation. Both of the above help identify automation candidates throughout an organization. IDP is a use case of IA and is growing in popularity, as there are so many document-intensive processes across organizations that impact many employees. ... Data governance, visibility of shadow deployments (and having guardrails in place for them), and security are all important to set in place ahead of RPA/IA to ensure architectural readiness. Another challenge is ensuring that the infrastructure is able to handle the increased speed and volume of transactions related to automated processes, whether it’s their own or someone they do business with.

Importance of DevOps Automation in 2021

From a software development perspective, DevOps automation enhances the performance of engineering teams with the help of top-notch DevOps tools. It encourages cross-functional teams to work together by removing organizational silos. The reduced team inter-dependencies and manual processes for infrastructure management have enabled software teams to focus on frequent releases, receiving quick feedback, and improving user experience. From an organizational point of view, DevOps automation reduces the chances of human error and, with the help of auto-healing features, saves the time spent on error detection. Additionally, it significantly reduces the time required for deploying new features and removes inconsistencies caused by human error. Enterprises should first focus on the areas where they face the most challenges. The decision on what to automate depends on their organizational needs and technological feasibility. The DevOps automation teams should be able to analyze which areas of the DevOps lifecycle need automation.

The biggest problem with ransomware is not encryption, but credentials

The obvious concern about being the victim of a ransomware attack is being locked out from data, applications, and systems – making organizations unable to do business. Then, there is the concern of what an attack is going to cost; the question of whether or not to pay the ransom is being forced by cybercriminal gangs, as 77% of attacks also included the threat of leaking exfiltrated data. Next are the issues of lost revenue, an average of 23 days of downtime, remediation costs, and the impact on the business’s reputation. But those are post-attack concerns, and you should, first and foremost, be laser-focused on what effective measures you can take to stop ransomware attacks. Organizations that are truly concerned about the massive growth in ransomware are working to understand the tactics, techniques and procedures used by threat actors to craft preventative, detective and responsive measures to either mitigate the risk or minimize the impact of an attack. Additionally, these organizations are scrutinizing the technologies, processes and frameworks they have in place, as well as asking the same of their third-party supply chain vendors.

If your organization is looking to hire data engineers in the next 12 months, be prepared to move quickly in your hiring process and think carefully before you waste time negotiating salaries. That’s some of the advice for hiring managers from the first edition of Salaries of Data Engineering Professionals from the quantitative executive recruiting firm Burtch Works. Known for its work with data scientists and analytics professionals, and its annual salary surveys that look at the employment trends for those professionals, this year, Burtch Works has expanded by offering this new survey for data engineers, conducted in individual interviews with 320 of these professionals based in the United States. The survey looks at salaries, demographics, and trends among data engineers. What is a data engineer? These are the professionals responsible for building and managing the data and IT infrastructure that sits between the data sources and the data analytics. They report into the IT department, the data science department, or both. According to the Burtch Works survey, these professionals command a high rate of pay.

Data And Analytics In Healthcare: Addressing the 21st-century Challenges

Scientists have claimed victory against future diseases after successfully decoding the human genome. The marriage of this knowledge to the health data generated by patients would enable clinicians to make better decisions about our care. Using predictive analytics offers two benefits: better care and lower costs. The biggest lesson of recent global health crises such as the COVID-19, SARS, dengue and malaria outbreaks is that pharma and healthcare companies cannot afford merely to react to every emerging situation. They need to track several data streams of local, regional, and global trends, create a database, and then predict various scenarios. Data analytics helps companies develop their predictive models, enabling them to make quicker, intelligent decisions, build partnerships, and resolve bottlenecks before the crisis hits the shore. Such data-driven measures aim to save invaluable lives and allow care to be personalized for each individual. Predictive analytics can classify particular risk factors for diverse populations. This is very useful for patients suffering from multiple ailments with complex medical histories.

Quote for the day:

"Every great leader has incredible odds to overcome." -- Wayde Goodall

Daily Tech Digest - September 27, 2021

How to Get Started With Zero Trust in a SaaS Environment

While opinions vary on what zero trust is and is not, this security model generally considers the user's identity as the root of decision-making when determining whether to allow access to an information resource. This contrasts with earlier approaches that made decisions based on the network from which the person was connecting. For example, we often presumed that workers in the office were connecting directly to the organization's network and, therefore, could be trusted to access the company's data. Today, however, organizations can no longer grant special privileges based on the assumption that the request is coming from a trusted network. With the high number of remote and geographically dispersed employees, there is a good chance the connections originate from a network the company doesn't control. This trend will continue. IT and security decision-makers expect remote end users to account for 40% of their workforce after the COVID-19 outbreak is controlled, an increase of 74% relative to pre-pandemic levels, according to "The Current State of the IT Asset Visibility Gap and Post-Pandemic Preparedness," with research conducted by the Enterprise Strategy Group for Axonius.
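The identity-first decision described above can be made concrete in a few lines. The sketch below is purely illustrative (the policy fields, user list, and function names are hypothetical, not any vendor's API): access hinges on the verified identity and device posture, and a request from an "internal" network earns no extra trust.

```python
# Toy zero-trust access check: identity is the root of the decision,
# the source network is recorded but never implicitly trusted.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool
    device_compliant: bool
    source_ip: str  # logged for audit, but carries no trust by itself

ALLOWED_USERS = {"alice", "bob"}  # hypothetical authorization list

def allow(req: AccessRequest) -> bool:
    # No branch consults source_ip: an office IP grants nothing extra.
    return (
        req.user_id in ALLOWED_USERS
        and req.mfa_verified
        and req.device_compliant
    )

print(allow(AccessRequest("alice", True, True, "203.0.113.7")))  # remote: allowed
print(allow(AccessRequest("alice", True, True, "10.0.0.5")))     # internal IP: no different
print(allow(AccessRequest("mallory", True, True, "10.0.0.5")))   # "trusted" network alone: denied
```

Contrast this with the older perimeter model, where the equivalent function would have begun with a check on the source subnet.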

Tons Of Data At The Company Store

Confidentially, many chief data officers will admit that their companies suffer from what might euphemistically be called “data dyspepsia”: they produce and ingest so much data that they cannot properly digest it. Like it or not, there is such a thing as too much data – especially in an era of all-you-can-ingest data comestibles. “Our belief is that more young companies die of indigestion than starvation,” said Adam Wilson, CEO of data engineering specialist Trifacta, during a recent episode of Inside Analysis, a weekly data- and analytics-focused program hosted by Eric Kavanagh. So what if Wilson was referring specifically to Trifacta’s decision to stay focused on its core competency, data engineering, instead of diversifying into adjacent markets. So what if he was not, in fact, alluding to a status quo in which the average business feels overwhelmed by data. Wilson’s metaphor is no less apt if applied to data dyspepsia. It also fits with Trifacta’s own pitch, which involves simplifying data engineering – and automating it, insofar as is practicable – in order to accelerate the rate at which useful data can be made available to more and different kinds of consumers.

Hyperconverged analytics continues to guide Tibco strategy

One of the trends we're seeing is that people know how to build models, but there are two challenges. One is on the input side and one is on the output side. On the input side, you can build the greatest models in the world, but if you feed them bad data that's not going to help. So there's a renewed interest around things like data governance, data quality and data security. AI and ML are still very important, but there's more to it than just building the models. The quality of the data, and the governance and processes around the data, are also very important. That way you feed your model better data, which makes your model more accurate, and from there you're going to get better outcomes. On the output side, since there are so many models being built, organizations are having trouble operationalizing them all. How do you deploy them into production, how do you monitor them, how do you know when it's time to go back and rework that model, how do you deploy them at the edge, how do you deploy them in the cloud and how do you deploy them in an application?

Gamification: A Strategy for Enterprises to Enable Digital Product Practices

As digital products take precedence, the software ecosystem brings new possibilities to products. With the rise of digital products, cross-functional boundaries are blurring. Acquiring new skills and unlearning old ways are critical. Gamification can support a ladder approach to acquiring and utilizing new skills for continuous software delivery ecosystems, testing and security. However, harnessing collective wisdom through gamification needs a systematic framework that integrates game ideation, design, validation and incentives with different persona types. Applying gamification systematically, so that people can solve serious problems, ideate, and come together to create new knowledge in a fun way, is challenging. To successfully apply gamification for upskilling and boosting productivity, it must be accompanied by an understanding of its purpose from the following two critical perspectives: Benefits of embracing gamification for people – removing fear, having fun, and making the desirable shift towards new knowledge; creating an environment that is inclusive and can provide a learning ecosystem for all.

Artificial Intelligence: The Future Of Cybersecurity?

Cybersecurity in Industry 4.0 can't be tackled in the same way as that of traditional computing environments. The number of devices and associated challenges are far too many. Imagine monitoring security alerts for millions of connected devices globally. IIoT devices possess limited computing power and, therefore, lack the ability to run security solutions. This is where AI and machine learning come into play. ML can make up for the lack of security teams. AI can help discover devices and hidden patterns while processing large amounts of data. ML can help monitor incoming and outgoing traffic for any deviations in behavior in the IoT ecosystem. If a threat or anomaly is detected, alarms can be sent to security admins warning them about the suspicious traffic. AI and ML can be used to build lightweight endpoint detection technologies. This can be an indispensable solution, especially in situations where IoT devices lack the processing power and need behavior-based detection capabilities that aren't as resource intensive. AI and ML technologies are a double-edged sword. 

3 ways any company can guard against insider threats this October

Companies don’t become cyber smart by accident. In fact, cybersecurity is rarely top-of-mind for the average employee as they go about their day and pursue their professional responsibilities. Therefore, businesses are responsible for educating their workforce, training their teams to identify and defend against the latest threat patterns. For instance, phishing scams have increased significantly since the pandemic’s onset, and each malicious message threatens to undermine data integrity. Meanwhile, many employees can’t identify these threats, and they wouldn’t know how to respond if they did. Of course, education isn’t limited to phishing scams. One survey found that 61 percent of employees failed a basic quiz on cybersecurity fundamentals. With the average company spending only 5 percent of its IT budget on employee training, it’s clear that education is an untapped opportunity for many organizations to #BeCyberSmart. When coupled with intentional accountability measures that ensure training is implemented, companies can transform their unaware employees into incredible defensive assets.

VMware gears up for a challenging future

“What we are doing is pivoting our portfolio or positioning our portfolio to become the multi-cloud platform for our customers in three ways,” Raghuram said. “One is enabling them to execute their application transformation on the cloud of their choice using our Tanzu portfolio. And Tanzu is getting increased momentum, especially in the public cloud to help them master the complexities of doing application modernization in the cloud. And of course, by putting our cloud infrastructure across all clouds, and we are the only one with the cloud infrastructure across all clouds and forming the strategic partnerships with all of the cloud vendors, we are helping them take their enterprise applications to the right cloud,” Raghuram said. Building useful modern enterprise applications is a core customer concern, experts say. “Most new apps are built on containers for speed and scalability. The clear winner of the container wars was Kubernetes,” said Scott Miller, senior director of strategic partnerships for World Wide Technology (WWT), a technology and supply-chain service provider and a VMware partner.

Software cybersecurity labels face practical, cost challenges

Cost and feasibility are among the top challenges of creating consumer labels for software. Adding to these challenges is the fact that software is continually updated. Moreover, software comes in both open-source and proprietary formats and is created by a global ecosystem of firms that range from mom-and-pop shops all the way up to Silicon Valley software giants. "It's way too easy to create requirements that cannot be met in the real world," David Wheeler, director of open source supply chain security at the Linux Foundation and leader of the Core Infrastructure Initiative Best Practices Badge program, said at the workshop. "A lot of open-source projects allow people to use them at no cost. There's often no revenue stream. You have to spend a million dollars at an independent lab for an audit. [That] ignores the reality that for many projects, that's an impractical burden." ... Another critical aspect of creating software labels is to ensure that they don't reflect static points in time but are instead dynamic, taking into account the fluid nature of software. 

Work’s not getting any easier for parents

Part of many managers’ discomfort with remote work is that they are unsure how to gauge their off-site employees’ performance and productivity. Some business leaders equate face time with productivity. I’ll never forget a visit I had to a Silicon Valley startup in which the manager showing me around described a colleague this way: “He’s such a great worker. He’s here every night until 10, and back in early every morning!” In my work helping businesses update their policies and cultures to accommodate caregivers, I often have to rid managers of this old notion. There’s nothing impressive, or even good, about being in the office so much. To help change the paradigm, I work with managers to find new ways of measuring an individual’s performance and productivity. Instead of focusing on hours worked per day, we look at an employee’s achievements across a broader time metric, such as a month or quarter. We ask, what did the employee do for the company during that time? It’s often then that businesses realize how little overlap there is between those who are seen working the most and those who have the greatest impact on the company. 

How to use feedback loops to improve your team's performance

In systems, feedback is a fundamental force behind their workings. When we fly a plane, we get feedback from our instruments and our co-pilot. When we develop software, we get feedback from our compiler, our tests, our peers, our monitoring, and our users. Dissent works because it’s a form of feedback, and clear, rapid feedback is essential for a well-functioning system. As examined in “Accelerate”, a four-year study of thousands of technology organizations found that fostering a culture that openly shares information is a sure way to improve software delivery performance. It even predicts the ability to meet non-technical goals. These cultures, known as “generative” in Ron Westrum’s model of organizational culture, are performance- and learning-oriented. They understand that information, especially if it’s difficult to receive, only helps to achieve their mission, and so, without fear of retaliation, associates speak up more frequently than in rule-oriented (“bureaucratic”) or power-oriented (“pathological”) cultures. Messengers are praised, not shot.

Quote for the day:

"A pat on the back is only a few vertebrae removed from a kick in the pants, but is miles ahead in results." -- W. Wilcox

Daily Tech Digest - September 26, 2021

You don't really own your phone

When you purchase a phone, you own the physical parts you can hold in your hand. The display is yours. The chip inside is yours. The camera lenses and sensors are yours to keep forever and ever. But none of this, not a single piece, is worth more than its value in scrap without the parts you don't own but are graciously allowed to use — the copyrighted software and firmware that powers it all. The companies that hold these copyrights may not care how you use the product you paid a license for, and you don't hear a lot about them outside of the right to repair movement. Xiaomi, like Google and all the other copyright holders who provide the things which make a smartphone smart, really only wants you to enjoy the product enough to buy from them the next time you purchase a smart device. Xiaomi pissing off people who buy its smartphones isn't a good way to get those same people to buy another or buy a fitness band or robot vacuum cleaner. When you set up a new phone, you agree with these copyright holders that you'll use the software on their terms.

Edge computing has a bright future, even if nobody's sure quite what that looks like

Edge computing needs scalable, flexible networking. Even if a particular deployment is stable in size and resource requirements over a long period, to be economic it must be built from general-purpose tools and techniques that can cope with a wide variety of demands. To that end, software defined networking (SDN) has become a focus for future edge developments, although a range of recent research has identified areas where it doesn't yet quite match up to the job. SDN's characteristic approach is to divide the task of networking into two tasks of control and data transfer. It has a control plane and a data plane, with the former managing the latter by dynamic reconfiguration based on a combination of rules and monitoring. This looks like a good match for edge computing, but SDN typically has a centralised control plane that expects a global view of all network activity. ... Various approaches – multiple control planes, increased intelligence in edge switch hardware, dynamic network partitioning on demand, geography and flow control – are under investigation, as are the interactions between security and SDN in edge management.
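The control-plane/data-plane split described above can be sketched in a few lines. This is a toy illustration, not OpenFlow or any real SDN API (all class and method names here are invented): a centralised controller computes forwarding rules from its global view and pushes them into a switch's match/action table, while the data plane only does fast table lookups and punts unknown flows back to the controller.

```python
# Minimal sketch of SDN's two-plane design.
class DataPlane:
    """The forwarding element: just a match/action table lookup."""
    def __init__(self):
        self.flow_table = {}  # match (destination) -> action (output port)

    def install_rule(self, dst: str, out_port: int):
        self.flow_table[dst] = out_port

    def forward(self, dst: str) -> int:
        # Unknown flows are punted to the controller (signalled here by -1).
        return self.flow_table.get(dst, -1)

class Controller:
    """Centralised control plane with a global view of the topology."""
    def __init__(self, routes: dict):
        self.routes = routes  # dst -> out port, derived from rules + monitoring

    def configure(self, switch: DataPlane):
        for dst, port in self.routes.items():
            switch.install_rule(dst, port)

edge_switch = DataPlane()
Controller({"10.0.0.1": 1, "10.0.0.2": 2}).configure(edge_switch)
print(edge_switch.forward("10.0.0.2"))  # 2
print(edge_switch.forward("10.0.0.9"))  # -1: punt to controller
```

The edge-computing tension the article describes lives in the `Controller` class: it assumes a single global view, which is exactly what a geographically dispersed edge deployment makes hard to maintain.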

TangleBot Malware Reaches Deep into Android Device Functions

In propagation and theme, TangleBot resembles other mobile malware, such as the FluBot SMS malware that targets the U.K. and Europe or the CovidLock Android ransomware, which is an Android app that pretends to give users a way to find nearby COVID-19 patients. But its wide-ranging access to mobile device functions is what sets it apart, Cloudmark researchers said. “The malware has been given the moniker TangleBot because of its many levels of obfuscation and control over a myriad of entangled device functions, including contacts, SMS and phone capabilities, call logs, internet access, [GPS], and camera and microphone,” they noted in a Thursday writeup. To reach such a long arm into Android’s internal business, TangleBot grants itself privileges to access and control all of the above, researchers said, meaning that the cyberattackers would now have carte blanche to mount attacks with a staggering array of goals. For instance, attackers can manipulate the incoming voice call function to block calls and can also silently make calls in the background, with users none the wiser. 

Why CEOs Should Absolutely Concern Themselves With Cloud Security

Probably the biggest reason cybersecurity needs to be elevated to one of your top responsibilities is simply that, as the CEO, you call most of the shots surrounding how the business is going to operate. To lead anyone else, you have to have a crystal-clear big picture of how everything interconnects and what ramifications threats in one area have to other areas. Additionally, it’s up to you to hire and oversee people who truly understand servers and cloud security and who can build a secure infrastructure and applications. That said, virtually all businesses today are “digital” businesses in some sense, if that means having a website, an app, processing credit cards with point of sale readers or using the ‘net for your social media marketing. All of these things can be potential points of entry for hackers, who happily take advantage of any vulnerability they can find. And with more people working remotely and generally enjoying a more mobile lifestyle, the risks of cloud computing are here to stay.

Better Incident Management Requires More than Just Data

To the uninitiated, all complexity looks like chaos. Real order requires understanding. Real understanding requires context. I’ve seen teams all over the tech world abuse data and metrics because they don’t relate it to its larger context: what are we trying to solve and how might we be fooling ourselves to reinforce our own biases? In no place is this more true than in the world of incident management. Things go wrong in businesses, large and small, every single day. Those failures often go unreported, as most people see failure through the lens of blame, and no one wants to admit they made a mistake. Because of that fact, site reliability engineering (SRE) teams establishing their own incident management process often invest in the wrong initial metrics. Many teams are overly concerned with reducing MTTR: mean time to resolution. Like the British government, those teams are overly relying on their metrics and not considering the larger context. Incidents are almost always going to be underreported initially: people don’t want to admit things are going wrong.
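To see why MTTR alone can mislead, it helps to compute it. The snippet below uses fabricated incident timestamps (not data from the article) and shows how a single long incident dominates the mean, which is one concrete way a team "relying on their metrics" without context fools itself.

```python
# MTTR = mean time from detection to resolution, here in minutes.
from datetime import datetime
from statistics import mean, median

incidents = [  # (detected, resolved) - made-up example data
    (datetime(2021, 9, 1, 10, 0), datetime(2021, 9, 1, 10, 30)),
    (datetime(2021, 9, 3, 14, 0), datetime(2021, 9, 3, 14, 20)),
    (datetime(2021, 9, 7, 9, 0),  datetime(2021, 9, 7, 21, 0)),  # one long outage
]

durations = [(end - start).total_seconds() / 60 for start, end in incidents]
print(f"MTTR (mean): {mean(durations):.0f} min")   # skewed by the outlier
print(f"median:      {median(durations):.0f} min") # what a "typical" incident looks like
```

With two 20-30 minute incidents and one 12-hour one, the mean lands near 4.5 hours while the median stays at 30 minutes: the same data tells two very different stories depending on the summary chosen.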

Three Skills You’ll Need as a Senior Data Scientist

In the light of data science, I would say critical thinking is answering the “why”s in your data science project. Before elaborating on what I mean, the most important prerequisite is to know the general flow of a data science project. The diagram below shows that. This is a slightly different view from the cyclic series of steps you might see elsewhere. I think this is a more realistic view than seeing it as a cycle. Now, to elaborate. In a data science project, there are countless decisions you have to make: supervised vs unsupervised learning, selecting raw fields of data, feature engineering techniques, selecting the model, evaluation metrics, etc. Some of these decisions will be obvious; for example, if you have a set of features and a label associated with them, you’d go with supervised learning instead of unsupervised learning. A seemingly tiny checkpoint you overlooked might be enough to derail the project. And it can cost money for the company and put your reputation on the line. When you answer not just “what you’re doing” but also “why you’re doing it”, it closes down most of the cracks where problems like the above can seep in.

The Benefits and Challenges of Passwordless Authentication

Passwordless authentication is a process that verifies a user's identity with something other than a password. It strengthens security by eliminating password management practices and the risk of threat vectors. It is an emerging subfield of identity and access management and will revolutionize the way employees work. ... Passwordless authentication uses some modern authentication methods that reduce the risk of being targeted via phishing attacks. With this approach, employees won't need to provide any sensitive information to the threat actors that give them access to their accounts or other confidential data when they receive a phishing email. ... Passwordless authentication appears to be a secure and easy-to-use approach, but there are challenges in its deployment. The most significant issue is the budget and migration complexity. While setting up a budget for passwordless authentication, enterprises should include costs for buying hardware and its setup and configuration. Another challenge is dealing with old-school mentalities. Most IT leaders and employees are reluctant to move away from traditional security methods and try new ones.

Using CodeQL to detect client-side vulnerabilities in web applications

The idea of CodeQL is to treat source code as a database which can be queried using SQL-like statements. There are lots of languages supported, among which is JavaScript. For JavaScript, both server-side and client-side flavours are supported. JS CodeQL understands modern editions such as ES6 as well as frameworks like React (with JSX) and Angular. CodeQL is not just grep, as it supports taint tracking, which allows you to test if a given user input (a source) can reach a vulnerable function (a sink). This is especially useful when dealing with DOM-based Cross Site Scripting vulnerabilities. By tainting a user-supplied DOM property such as location.hash, one can test if this value actually reaches one of the XSS sinks, e.g. element.innerHTML or document.write(). The common use-case for CodeQL is to run a query suite against open-source code repositories. To do so you may install CodeQL locally or use https://lgtm.com/. For the latter case you should specify a GitHub repository URL and add it as your project.

Moving beyond agile to become a software innovator

Experience design is a specific capability focused on understanding user preferences and usage patterns and creating experiences that delight them. The value of experience design is well established, with organizations that have invested in design exceeding industry peers by as much as 5 percent per year in growth of shareholder return. What differentiates best-in-class organizations is that they embed design in every aspect of the product or service development. As a core part of the agile team, experience designers participate in development processes by, for example, driving dedicated design sprints and ensuring that core product artifacts, such as personas and customer journeys, are created and used throughout product development. This commitment leads to greater adoption of the products or services created, simpler applications and experiences, and a substantial reduction of low-value features. ... Rather than approaching it as a technical issue, the team focused on addressing the full onboarding journey, including workflow, connectivity, and user communications. The results were impressive. The team created a market-leading experience that enabled their first multimillion-dollar sale only four months after it was launched and continued to accelerate sales and increase customer satisfaction.

The relationship between data SLAs & data products

The data-as-a-product model intends to mend the gap that the data lake left open. In this philosophy, company data is viewed as a product that will be consumed by internal and external stakeholders. The data team’s role is to provide that data to the company in ways that promote efficiency, good user experience, and good decision making. As such, the data providers and data consumers need to work together to answer the questions put forward above. Coming to an agreement on those terms and spelling it out is called a data SLA. SLA stands for service-level agreement. An SLA is a contract between two parties that defines and measures the level of service a given vendor or product will deliver, as well as remedies if they fail to deliver. SLAs are an attempt to define expectations of the level of service and quality between providers and consumers. They’re very common when an organization is offering a product or service to an external customer or stakeholder, but they can also be used between internal teams within an organization.
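What an internal data SLA might look like once "spelled out" can be sketched in code. All thresholds and field names below are hypothetical, chosen only to illustrate the shape of such an agreement: the providers promise freshness, volume, and schema, and each delivery is checked against those terms.

```python
# Toy data-SLA check: which terms of the agreement does a delivery violate?
from datetime import datetime, timedelta

SLA = {  # hypothetical agreed terms between data providers and consumers
    "max_staleness": timedelta(hours=24),   # data no older than a day
    "min_row_count": 1000,                  # expected minimum volume
    "required_columns": {"user_id", "event_ts", "amount"},
}

def check_sla(last_loaded: datetime, row_count: int, columns: set,
              now: datetime) -> list:
    """Return the list of SLA terms this delivery violates (empty = compliant)."""
    violations = []
    if now - last_loaded > SLA["max_staleness"]:
        violations.append("staleness")
    if row_count < SLA["min_row_count"]:
        violations.append("row_count")
    if not SLA["required_columns"] <= columns:
        violations.append("schema")
    return violations

now = datetime(2021, 9, 25, 12, 0)
print(check_sla(datetime(2021, 9, 25, 3, 0), 5000,
                {"user_id", "event_ts", "amount"}, now))  # []
print(check_sla(datetime(2021, 9, 23, 3, 0), 120,
                {"user_id"}, now))  # ['staleness', 'row_count', 'schema']
```

The value of writing the terms down this way is that "level of service" stops being a vague expectation and becomes something both parties can measure on every delivery.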

Quote for the day:

"If you can't handle others' disapproval, then leadership isn't for you." -- Miles Anthony Smith

Daily Tech Digest - September 25, 2021

Top 5 Objections to Scrum (and Why Those Objections are Wrong)

Many software development teams are under pressure to deliver work quickly because other teams have deadlines they need to meet. A common objection to Agile is that teams feel that when they have a schedule to meet, a traditional waterfall method is the only way to go. Nothing could be further from the truth. Not only can Scrum work in these situations, but in my experience, it increases the probability of meeting challenging deadlines. Scrum works well with deadlines because it’s based on empiricism, lean thinking, and an iterative approach to product delivery. In a nutshell, empiricism is making decisions based on what is known. In practice, this means that rather than making all of the critical decisions about an initiative upfront, when the least is known, Agile initiatives practice just-in-time decision-making by planning smaller batches of work more often. Lean thinking means eliminating waste to focus only on the essentials, and iterative delivery involves delivering a usable product frequently.

The Future Is Data Center as a Service

The fact is that whether we realize it or not, we’ve gotten used to thinking of the data center as a fluid thing, particularly if we use cluster paradigms such as Kubernetes. We think of pods like tiny individual computers running individual applications, and we start them up and tear them down at will. We create applications using multicloud and hybrid cloud architectures to take advantage of the best situation for each workload. Edge computing has pushed this analogy even further, as we literally spin up additional nodes on demand, with the network adjusting to the new topology. Rightfully so; with the speed of innovation, we need to be able to tear down a data center that is compromised or bring up a new one to replace it, or to enhance it, at a moment’s notice. In a way, that’s what we’ve been doing with public cloud providers: instantiating “hardware” when we need it and tearing it down when we don’t. We’ve been doing this on the cloud providers’ terms, with each public cloud racing to lock in as many companies and workloads as possible with a race to the bottom on cost so they can control the conversation.

DevSecOps: 5 ways to learn more

There’s a clear connection between DevSecOps culture and practices and the open source community, a relationship that Anchore technical marketing manager Will Kelly recently explored in an opensource.com article, “DevSecOps: An open source story.” As you build your knowledge, getting involved in a DevSecOps-relevant project is another opportunity to expand and extend your experience. That could range from something as simple as joining a project’s community group or Slack to ask questions about a particular tool, to taking on a larger role as a contributor at some point. The threat modeling tool OWASP Threat Dragon, for example, welcomes new contributors via its GitHub repository and website, including testers and coders. ... The value of various technical certifications is a subject of ongoing – or at least on-again, off-again – debate in the InfoSec community. But IT certifications, in general, remain a solid complementary career development component. Considering a DevSecOps-focused certification track is in itself a learning opportunity since any credential worth more than a passing glance should require some homework to attain.

How Medical Companies are Innovating Through Agile Practices

Within regulatory constraints, there is plenty of room for successful use of Agile and Lean principles, despite the lingering doubts of some in quality assurance or regulatory affairs. Agile teams in other industries have demonstrated that they can develop without any compromise to quality. Additional documentation is necessary in regulated work, but most of it can be automated and generated incrementally, which is a well-established Agile practice. Medical product companies are choosing multiple practices, from both Agile and Lean. Change leaders within the companies are combining those ideas with their own deep knowledge of their organization’s patterns and people. They’re finding creative ways to achieve business goals previously out of reach with traditional “big design up front” practices. ... Our goal here is to show how the same core principles in Agile and Lean played out in very different day-to-day actions at the companies we profiled, and how they drove significant business goals for each company.

The Importance of Developer Velocity and Engineering Processes

At its core, an organization is nothing more than a collection of moving parts. A combination of people and resources moving towards a common goal. Delivering on your objectives requires alignment at the highest levels - something that becomes increasingly difficult as companies scale. Growth increases team sizes creating more dependencies and communication channels within an organization. Collaboration and productivity issues can quickly arise in a fast-scaling environment. It has been observed that adding members to a team drives inefficiency with negligible benefits to team efficacy. This may sound counterintuitive but is a result of the creation of additional communication lines, which increases the chance of organizational misalignment. The addition of communication lines brought on by organization growth also increases the risk of issues related to transparency as teams can be unintentionally left “in the dark.” This effect is compounded if decision making is done on the fly, especially if multiple people are making decisions independent of each other.
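The "additional communication lines" effect described above is just the pairwise-connection formula: n people have n*(n-1)/2 possible one-to-one channels, so channels grow quadratically while headcount grows linearly. A quick sketch makes the scaling concrete:

```python
# Number of one-to-one communication lines in a team of n people.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (3, 5, 10, 20):
    print(f"{n:>2} people -> {channels(n):>3} communication lines")
```

Doubling a 10-person team to 20 takes the line count from 45 to 190, more than quadrupling it, which is why adding members can reduce efficiency even as it adds capacity.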

Tired of AI? Let’s talk about CI.

Architectures become increasingly complex with each neuron. I suggest looking into how many parameters GPT-4 has ;). Now, you can imagine how many different architectures you can have with the infinite number of configurations. Of course, hardware limits our architecture size, but NVIDIA (and others) are scaling the hardware at an impressive pace. So far, we’ve only examined the computations that occur inside the network with established weights. Finding suitable weights is a difficult task, but luckily math tricks exist to optimize them. If you’re interested in the details, I encourage you to look up backpropagation. Backpropagation exploits the chain rule (from calculus) to optimize the weights. For the sake of this post, it’s not essential to understand how the learning of the weights works, but it’s necessary to know that backpropagation does it very well. But it’s not without its caveats. As NNs learn, they optimize all of the weights relative to the data. However, the weights must first be defined — they must have some value. This raises the question: where do we start?
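The chain-rule trick behind backpropagation can be shown on a network so small it fits in a few lines. This is a hand-worked toy (one input, one hidden ReLU unit, squared-error loss, made-up numbers), not how frameworks are implemented, but the mechanics are the same: multiply local derivatives backwards through the graph.

```python
# Toy backpropagation for y = w2 * relu(w1 * x), loss = (y - target)^2
x, w1, w2, target = 2.0, 0.5, 3.0, 10.0

# forward pass
h = w1 * x                  # 1.0
a = max(0.0, h)             # ReLU activation -> 1.0
y = w2 * a                  # 3.0
loss = (y - target) ** 2    # 49.0

# backward pass: one local derivative per step, chained together
dL_dy = 2 * (y - target)                  # d(loss)/dy        = -14.0
dL_dw2 = dL_dy * a                        # chain through y   = -14.0
dL_da = dL_dy * w2                        # chain through y   = -42.0
dL_dh = dL_da * (1.0 if h > 0 else 0.0)   # ReLU's derivative = -42.0
dL_dw1 = dL_dh * x                        # chain through h   = -84.0

print(dL_dw1, dL_dw2)  # -84.0 -14.0: both gradients, from one backward sweep
```

A gradient-descent step would then nudge each weight against its gradient; the point of backpropagation is that one backward sweep yields every weight's gradient at once, however deep the network.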

How do databases support AI algorithms?

Oracle has integrated AI routines into their databases in a number of ways, and the company offers a broad set of options in almost every corner of its stack. At the lowest levels, some developers, for instance, are running machine learning algorithms in the Python interpreter that’s built into Oracle’s database. There are also more integrated options like Oracle’s Machine Learning for R, a version that uses R to analyze data stored in Oracle’s databases. Many of the services are incorporated at higher levels — for example, as features for analysis in the data science tools or analytics. IBM also has a number of AI tools that are integrated with their various databases, and the company sometimes calls Db2 “the AI database.” At the lowest level, the database includes functions in its version of SQL to tackle common parts of building AI models, like linear regression. These can be threaded together into customized stored procedures for training. Many IBM AI tools, such as Watson Studio, are designed to connect directly to the database to speed model construction.
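To make concrete what an in-database linear-regression routine of the kind mentioned above actually computes, here is the closed-form least-squares fit on toy data. This is a plain-Python sketch of the underlying math (the data is fabricated), not the SQL syntax of Db2 or Oracle:

```python
# Ordinary least squares for y = slope * x + intercept, closed form.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"y = {slope:.2f} * x + {intercept:.2f}")
```

The appeal of running this inside the database, as the article describes, is that these sums are just aggregates over a table: the model is fit where the data already lives, without exporting it to a separate tool.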

A Comprehensive Guide to Maximum Likelihood Estimation and Bayesian Estimation

An estimation function helps estimate the parameters of a statistical model from data with random values; estimation is the process of extracting parameters from randomly distributed observations. In this article, we will give an overview of two estimation approaches: Maximum Likelihood Estimation and Bayesian Estimation. Before examining these two, we will try to understand the probability distributions on which both estimation methods depend. The major points to be discussed in this article are listed below. ... As the name suggests, in statistics Maximum Likelihood Estimation is a method for estimating the parameters of an assumed probability distribution. The likelihood function measures the goodness of fit of a statistical model to data for given parameter values, and the parameters are estimated by maximizing the likelihood function, so that the observed data become as probable as possible under the model.
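A small sketch of maximum likelihood estimation for a Gaussian illustrates the idea (the data here are synthetic, for demonstration only): the log-likelihood is maximized by the sample mean and the biased sample standard deviation, and any other parameter values score lower.

```python
import numpy as np

# Synthetic observations from N(mu=5, sigma=2)
rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=2.0, size=1000)

def log_likelihood(mu, sigma, x):
    # Sum of log N(x | mu, sigma^2) over all observations
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

# Closed-form MLE for a Gaussian
mu_hat = data.mean()
sigma_hat = data.std()            # ddof=0: the MLE divides by n, not n-1

# Nudging either parameter away from the MLE lowers the likelihood
best = log_likelihood(mu_hat, sigma_hat, data)
assert best > log_likelihood(mu_hat + 0.1, sigma_hat, data)
assert best > log_likelihood(mu_hat, sigma_hat + 0.1, data)
print(f"mu_hat={mu_hat:.2f}, sigma_hat={sigma_hat:.2f}")
```

Bayesian estimation would instead place a prior over (mu, sigma) and report the posterior distribution rather than a single maximizing point.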

DORA explorers see pandemic boost in numbers of 'elite' DevOps performers

DORA has now added a fifth metric, reliability, defined as the degree to which a team "can keep promises and assertions about the software they operate." This is harder to measure, but the research on which the report is based nevertheless asked tech workers to self-assess their reliability, and there was a correlation between reliability and the other performance metrics. According to the report, 26 per cent of those polled put themselves into the elite category, compared to 20 per cent in 2019 and seven per cent in 2018. Are higher-performing techies more likely to respond to the survey? That seems likely, and self-assessment is also a flawed approach; but it is nevertheless an encouraging trend, presuming agreement that these metrics and the survey methodology are reasonable. Much of the report reiterates conventional DevOps wisdom. NIST's characteristics of cloud computing [PDF], including things like on-demand self-service for cloud resources, are found to be important. "What really matters is how teams implement their cloud services, not just that they are using cloud technologies," the researchers said.

Why Our Agile Journey Led Us to Ditch the Relational Database

Despite our developers having zero experience with MongoDB before our first release, they were still able to ship to production in eight weeks while eliminating more than 600 lines of code, coming in under time and budget. Pretty good, right? Additionally, the feedback was that the document data model eliminated the tedious data mapping and modeling work they were used to from a relational database, freeing up time our developers could allocate to high-priority projects. When we first began using MongoDB in summer 2017, we had two collections in production. A year later, that had grown to 120 collections in production, writing 10 million documents daily. Now each team was able to own its own dependency and have its own dedicated microservice and database, leading to a single pipeline for application and database changes. These changes, along with the hours saved by not refactoring our data model, allowed us to cut our deployment time to minutes, down from hours or even days.

Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik