Daily Tech Digest - October 09, 2020

Elusive hacker-for-hire group Bahamut linked to historical attack campaigns

According to BlackBerry, Bahamut relies heavily on manipulating its victims through a constantly shifting web of fake social media accounts and personas and even fake news websites and applications that don't appear to be malicious and often generate original content. This is meant to exploit the victims' interests and earn their trust. "First encounters with Bahamut begin innocently," the researchers said. "One might start with a simple direct message on Twitter or LinkedIn from an attractive woman, but with no suspicious link to click. Another might occur when scrolling through Twitter or Facebook in the form of a tech news article. Maybe you’d be taking a break at work and checking out a fitness website. Or perhaps you’re a supporter of Sikh rights looking for news about their movement for independence. You’d click, and nothing bad would appear to happen. On the contrary, you’d experience a legitimate, yet fabricated reality." One example is a technology news website that was originally focused on mobile device reviews. At some point it was taken over by the group, and the tone and nature of its articles changed to include security research and geopolitical themes.


Why MSPs Are Hacker Targets, and What To Do About It

Cyber defense doesn't come for free, and this is a significant challenge for MSPs. There are really only two places where an MSP can look to increase security standards for their end customers: The first is convincing the SMB to spend more on security, which is often a difficult upsell given already tight IT budgets. The second is to eat into their thin margins while still maintaining the ability to update defenses as the threat landscape demands. The vast majority of cybersecurity defense solutions are purpose-built for the enterprise, bringing a plethora of technology bells and whistles that are often overwhelming or unnecessary for the SMB. All too often, there's chatter around cybersecurity proselytizing the merits of artificial intelligence, machine learning, and behavioral analytics — all of which come with high costs. The truth is, MSPs need solutions that cater to their specific needs, not just from a technical point of view but also from financial and operational perspectives, in order to get to the coveted 80/20. Small businesses have gained operational agility with the rise of the cloud and software-as-a-service, and with that, attackers have evolved to go after the lowest-hanging fruit.


4 common C programming mistakes — and 5 tips to avoid them

Allocated memory (obtained using the malloc function) isn’t automatically disposed of in C. It’s the programmer’s job to dispose of that memory when it’s no longer used. Fail to free memory across repeated allocation requests, and you will end up with a memory leak. Try to use a region of memory that’s already been freed, and your program will crash—or, worse, will limp along and become vulnerable to an attack using that mechanism. Note that a memory leak should only describe situations where memory is supposed to be freed, but isn’t. If a program keeps allocating memory because the memory is actually needed and used for work, then its use of memory may be inefficient, but strictly speaking it’s not leakage. ... So why is the burden of checking an array’s bounds left to the programmer? In the official C specification, reading or writing an array beyond its boundaries is “undefined behavior,” meaning the spec has no say in what is supposed to happen. The compiler isn’t even required to complain about it. C has long favored giving power to the programmer even at their own risk. An out-of-bounds read or write typically isn’t trapped by the compiler, unless you specifically enable compiler options to guard against it.
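The two failure modes above can be made concrete with a short C sketch. The helper names here are hypothetical, chosen only for illustration; they are not from the article:

```c
#include <stdlib.h>
#include <string.h>

/* Mistake 1: every malloc needs a matching free. This function owns its
   buffer only for the duration of the call and releases it before
   returning, so repeated calls do not leak. */
size_t copy_length(const char *src)
{
    char *buf = malloc(strlen(src) + 1);
    if (buf == NULL)
        return 0;          /* always check malloc's result */
    strcpy(buf, src);
    size_t n = strlen(buf);
    free(buf);             /* omit this line and every call leaks */
    buf = NULL;            /* guards against accidental use-after-free */
    return n;
}

/* Mistake 2: C will not bounds-check arr[i] for you. A small wrapper
   makes the check explicit and returns a fallback on a bad index
   instead of invoking undefined behavior. */
int checked_get(const int *arr, size_t len, size_t i, int fallback)
{
    if (arr == NULL || i >= len)
        return fallback;   /* out of bounds: refuse rather than read */
    return arr[i];
}
```

Nothing forces callers through `checked_get`, of course; the point is that in C the bounds check exists only where the programmer writes it.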


Securing mobile devices, apps, and users should be every CIO’s top priority

The current distributed remote work environment has also triggered a new threat landscape, with malicious actors increasingly targeting mobile devices with phishing attacks. These attacks range from basic to sophisticated and are likely to succeed, with many employees unaware of how to identify and avoid a phishing attack. The study revealed that 43% of global employees are not sure what a phishing attack is. “Mobile devices are everywhere and have access to practically everything, yet most employees have inadequate mobile security measures in place, enabling hackers to have a heyday,” said Brian Foster, SVP Product Management, MobileIron. “Hackers know that people are using their loosely secured mobile devices more than ever before to access corporate data, and are increasingly targeting them with phishing attacks. Every company needs to implement a mobile-centric security strategy that prioritizes user experience and enables employees to maintain maximum productivity on any device, anywhere, without compromising personal privacy.” The study found that four distinct employee personas have emerged in the everywhere enterprise as a result of lockdown, and mobile devices play a more critical role than ever before in ensuring productivity.


Here Comes the Internet of Plastic Things, No Batteries or Electronics Required

The researchers, who have been steadily working on the technology since their original paper, have leveraged mechanical motion to provide the power for their objects. For instance, when someone opens a detergent bottle, the mechanical motion of unscrewing the top provides the power for it to communicate data. “We translate this mechanical motion into changes in antenna reflections to communicate data,” said Gollakota. “Say there is a Wi-Fi transmitter sending signals. These signals reflect off the plastic object; we can control the amount of reflections arriving from this plastic object by modulating it with the mechanical motion.” To ensure that the plastic objects can reflect Wi-Fi signals, the researchers employ composite plastic filament materials with conductive properties. These take the form of plastic with copper and graphene filings. “These allow us to use off-the-shelf 3D printers to print these objects but also ensure that when there is an ambient Wi-Fi signal in the environment, these plastic objects can reflect them by designing an appropriate antenna using these composite plastics,” said Gollakota.


Robotic Process Automation Is Coming: Here Are 5 Ways To Prepare For It

Typically, the most suitable tasks for RPA relate to “busy work” – meaning any work that involves a great number of repetitive actions, such as opening and searching records, transferring data between different digital locations, and repetitive mouse clicks. These sorts of tasks are prime candidates for automation. At the other end of the scale, jobs that involve creative thought and human decision making generally are not suitable for automation. ... Just because something can be automated doesn't mean it should be. Here, you'll need to identify which – of all the things it's possible to automate – are your key priorities. I recommend focusing on those tasks that help your organization achieve its overall aims, but currently consume a disproportionate amount of employees’ time. It’s usually a good idea to go for “quick wins” first, as these will help to establish the usefulness of RPA while winning over minds that may be resistant to the idea of reducing repetitive workloads or fearful of what it could mean for jobs and organizational culture. ... Having decided on the best ways to deploy RPA in your organization, you can begin researching the technologies that are available and the potential partners you may need to work with to create a successful deployment.


Bridging India’s cybersecurity gender gap

While India produces roughly 1.5 million engineer graduates each year, less than 30% of them are women and too many find it hard to get jobs. Many of them are the products of little-known colleges where they gain limited technical skills and graduate with certificates that few potential employers recognize. At the same time, India’s cybersecurity industry is growing fast. By 2025 it is forecast to be worth USD 35 billion as governments, companies, and startups seek to safeguard data. The demand for skilled cybersecurity workers has soared accordingly, but women still only make up around 11% of the sector’s workforce, both in India and globally. Dhasmana and Vedashree decided two years ago to help bridge that gender gap by setting up CyberShikshaa, which in Hindi means ‘cyber education.’ “As a tech industry organization, Microsoft felt it was our responsibility to create very strong career pathways, especially for young women to join the technology sector,” says Dhasmana. DSCI’s Vedashree says there was a need to evangelize cybersecurity as a career option for new female grads. “So, we aligned our charters for skills development in cyber fields and women in security and crafted this program together.”


Accelerating Digital Transformation with Data, People and Technologies

The first priority in embarking on digital transformation is to make information more accessible to everyone across the organisation. A report from Harvard Business Review shows that 55% of organisations agreed that data analytics for decision-making is extremely important today, and 92% confirmed the increasing importance of data and analytics through 2020 and 2021. Yet, despite the rise in the value of data analytics in the current era, many organisations are burdened with outdated processes. According to IDC, 70% of an analyst’s time is spent searching for data, and 44% of data workers’ time is spent on unsuccessful attempts. Together with the lack of talented professionals to harness the true power of data, these struggles further deter business leaders from creating a modern analytic environment for their organisation. With siloed data residing all over the place, one of the key challenges faced by companies is the advent of a variety of tools and platforms, which are either too complicated or lack sufficient training. These tools, therefore, set them back from truly achieving a digital transformation.


Emotet 101: How the Ransomware Works -- and Why It's So Darn Effective

Emotet exists in several different versions and incorporates a modular design. This makes it more difficult to identify and block. It uses social engineering techniques to gain entry into systems, and it is good at avoiding detection. What's more, Emotet campaigns are constantly evolving. Some versions steal banking credentials and highly sensitive enterprise data, which cybercrooks may threaten to release publicly. "This may serve as additional leverage to pay the ransom," Shier explains. An initial e-mail may look like it originated from a trusted source, such as a manager or top company executive, or it may offer a link to what appears to be a legitimate site or service. It usually relies on file compression techniques, such as ZIP, that spread the infection through various file formats, including .doc, .docx, and .exe. This hides the actual file name as it moves around within a network. These documents may contain phrases such as "payment details" or "please update your human resources file" to trick recipients into activating payloads. Some messages have recently revolved around COVID-19. They often arrive from a legitimate e-mail address within the company — and they can include both benign and infected files.


Your next digital transformation project should look very different

Almost three-quarters (70%) of CIOs agree or strongly agree that the pandemic has increased the collaboration between the technology team and the business; more than half (52%) say it has created a culture of inclusivity within their teams, too. Yet while absence has made the heart grow fonder during the extreme conditions of social distancing, CIOs will have to work hard to ensure the close virtual bonds that have been fostered during lockdown are able to flourish when we return to something like normal working conditions.  To that end, Haake says her firm's research with KPMG suggests the most important factor for CIOs in the post-COVID age is strong cultural leadership. "That's what a good digital leader is going to have to keep an eye on if they want to be successful," she says. Pioneering CIOs will lead a cultural transformation that hones the capabilities of the IT team and intertwines these talents with the demands of the business. Digital leaders who build that tight bond will be much more likely to deliver timely tech solutions that really do change the business for the better.



Quote for the day:

"The greatest good you can do for another is not just share your riches, but reveal to them their own." - Benjamin Disraeli

Daily Tech Digest - October 08, 2020

What is DevOps? A guide to common methods and misconceptions

DevOps has been defined in many ways: a set of practices that automate and integrate processes so teams can build, test, and release software faster and more reliably; a combination of culture and tools that enable organizations to ship software at a higher velocity; a culture, a movement, or a philosophy. None of these are wrong, and they are all important aspects of DevOps—but they don’t quite fully capture what’s at the heart of DevOps: the essential human element between Dev and Ops teams, when collaboration bridges the gap that allows teams to ship better software, faster. For organizations, DevOps provides value by increasing software quality and stability, and shortening lead times to production. For developers, DevOps focuses on both automation and culture—it’s about how the work is done. But most importantly, DevOps is about enabling people to collaborate across roles to deliver value to end users quickly, safely, and reliably. Altogether, it’s a combination of focus, means, and expected results. The focus of DevOps is people. The means of implementing DevOps is process and tooling. The result of DevOps is a better product, delivered faster and more reliably.


You Don’t Need To Be A Mathematician To Master Quantum Computing

Don’t get me wrong. Math is a great way to describe technical concepts. Math is a concise yet precise language. Our natural languages, such as English, by contrast, are lengthy and imprecise. It takes a whole book full of natural language to explain a small collection of mathematical formulae. But most of us are far better at understanding natural language than math. We learn our mother tongue as a young child and we practice it every single day. We even dream in our natural language. I can't tell whether some folks dream in math, though. For most of us, math is, at best, a foreign language. When we’re about to learn something new, it is easier for us if we use our mother tongue. It is hard enough to grasp the meaning of the new concept. If we’re taught in a foreign language, it is even harder, if not impossible. Of course, math is the native language of quantum mechanics and quantum computing, if you will. But why should we teach quantum computing only in its own language? Shouldn’t we try to explain it in a way more accessible to the learner? I’d say “absolutely”!


From DevOps to DevApps

Perhaps an easier way to think about event-driven is to think in terms of application flows. For example, when a trouble ticket is created in Zendesk, that data can be automatically analyzed by Amazon Comprehend to determine what the customer’s sentiment is (angry, satisfied, or confused). Then purchasing history, warranty information, and other pertinent information stored in a data warehouse like Amazon Redshift can be used to give the customer service rep a complete picture of the customer, to more expediently resolve any issues. One approach to using event-driven architecture utilizes JAMStack tools ("JAMStack" being a term coined by Netlify founding CEO Matt Biilmann). While WordPress is a platform that is used by an overwhelmingly large number of users deploying websites, the JAMStack is a collection of tools used to deliver web content. JAMStack tools can be used to deploy websites on the edge of the network, by reducing the number of database calls and bringing content closer to the user via CDN. However, you can also extend that stack by adding additional cloud native services, such as Auth0 for authentication. In a web app that collects user data, information could be stored in Airtable.


Digital transformation: The new rules for getting projects done

"Amidst all the misery, this has been a great opportunity to fast-forward a lot of changes that were on the stocks anyway," says Copinger-Symes. "So I wouldn't want to say it's been positive, because that would undercut the tragedies out there, but I think we've adjusted in stride and there are a lot of opportunities to look out for, too." Like other organisations, the UK military has had to put its five-year plan for tech-led change on the back-burner while it deals with the priorities of the pandemic. However, this change in emphasis has helped the organisation to reprioritise – and Copinger-Symes hopes the move away from a slower planning cycle is permanent, particularly when it comes to tech. "That has to change, because increasingly our competitiveness is found through the software not the hardware. And if you adopt decade-long planning cycles with software, you're not going to be very competitive," he says. "I think we were being forced to change our planning to a much shorter loop, so I think this pandemic has accelerated that process. And I'm not saying we're on top of it or we've got it all right, but I think that's just another acceleration of where we were moving anyway – to that software-based view of the world, rather than a hardware view of the world."


Why AWS Recently Open Sourced A GUI Library For IoT Developers

“Developers can model the location and sizes of nodes, edges, and panels, and Diagram Maker renders these as elements on the Diagram Maker canvas. The rendered UI is fully interactive and lets users move nodes around, create new edges, or delete nodes or edges,” says the AWS team. Diagram Maker also gives developers the ability to automatically lay out a given graph via an API interface. With this feature, application developers can visualise relationships by attaching the layout-related information to the resources, even if they are built outside the editor. In addition, application developers can use Diagram Maker’s capabilities for use cases that are outside of IoT. For example, with Diagram Maker, application developers can improve the experience for end customers by letting them intuitively and visually design cloud resources needed by cloud services like Infrastructure as Code (AWS CloudFormation) or Workflow Engines (AWS Step Functions) so as to figure out the various relationships and hierarchies, according to AWS. Alternatively, IoT application developers can utilise Diagram Maker’s plugin interface to author reusable plugins which can extend Diagram Maker’s core features.


Setting Up for Success: Governing Self-Service BI

With an increased number of users given access to the data layer, more reports and dashboards are generated to support business decisions, especially in the early stages of self-service BI adoption. When multiple individuals utilize the same data source at different times, it can lead to discrepancies in the data reported and redundant reports and dashboards generated from the same data set. Different business users also create their own versions of data sets derived from huge and more complex data sets. These activities, when compounded, will eventually lead to inconsistent reporting, which can set back executives making time-sensitive, data-driven business decisions. ... It’s not surprising how often and soon organizations run into performance issues with their self-service BI tools. Redundant data sets and reports can increase the load on systems, leading to capacity issues. Though some of the most powerful BI tools available provide best practices to improve report development, load testing, and capacity management, it still boils down to how end users are handling the technology in the absence of effective governance. ... An overloaded system, with redundant data sets and reports, may still have recourse, but when a security breach happens, it is one of the hardest setbacks that CIOs and organizations endure. 


The Changing Role of Data & the Chief Data Officer

In the short term, we’ve seen organizations increasingly focus on their data strategy. Data management has become a lot more important because organizations have to truly understand and trust their data. And especially for things like contact tracing, you have to have the right contact data. In that context, as part of a data coalition of my fellow CEOs, I wrote directly to Congress about the need for valid, reliable data to help us fight the pandemic in a much more thoughtful, data-driven way. We also worked very closely with one of the hardest hit states in the early days of the pandemic. They struggled because they didn’t have the necessary technology. They needed to get the right data quality to analyze health issues and figure out where the virus was spreading. We helped them leverage our technology to understand how to bring the right equipment—PPE, ventilators—to the right hospitals to the right patients at the right time. And just as innovative enterprises around the globe have leveraged data to transform themselves to serve their customers better and improve their products and services, we recommended that the government do the same. The government is the biggest employer in the US.


What CIOs need to know about hardening IT infrastructure

The good news is that infrastructure hardening technologies are readily available and can be added to existing environments, often with minimal disruption to production activities. However, to be extra prudent, it's important to test hardening products in a test environment -- if available -- to protect the integrity of production systems. ... In addition to the hardware and software tools that act as active frontline defense methods to hardening IT infrastructures, CIOs should consider establishing policies and procedures for infrastructure hardening. It may be that hardening activities are part of day-to-day IT operations, but it also makes sense to document these activities, especially if an IT audit is being planned. IT general controls (ITGCs) include numerous controls and metrics examined by IT auditors. Activities and initiatives mentioned above are among the ITGCs being audited. Key ITGCs include organization and management, communications, logical access security, physical and environmental security, change management, risk management, monitoring of controls, system operations, system availability, backup and recovery, incident management, and policies and procedures.


How Redis Simplifies Microservices Design Patterns

Microservice architecture continues to grow in popularity, yet it is widely misunderstood. While most conceptually agree that microservices should be fine-grained and business-oriented, there is often a lack of awareness regarding the architecture’s tradeoffs and complexity. For example, it’s common for DevOps architects to associate microservices with Kubernetes, or for an application developer to boil implementation down to using Spring Boot. While these technologies are relevant, neither containers nor development frameworks can overcome microservice architecture pitfalls on their own — specifically at the data tier. Martin Fowler, Chris Richardson, and fellow thought-leaders have long addressed the trade-offs associated with microservice architecture and defined characteristics that guide successful implementations. These include the tenets of isolation, empowerment of autonomous teams, embracing eventual consistency, and infrastructure automation. While keeping with these tenets can avoid the pains felt by early adopters and DIYers, the complexity of incorporating them into an architecture amplifies the need for best practices and design patterns — especially as implementations scale to hundreds of microservices.


The End of the Privacy Shield Agreement Could Lead to Disaster for Hyperscale Cloud Providers

Companies like Amazon Web Services (AWS), Google, and Microsoft were initially happy that SCCs weren’t annulled. But they soon realized how the Privacy Shield ruling could have more adverse consequences for them in the long run. Soon all three big players issued statements in a bid to assure customers that their clouds were still open, with Microsoft assuring their commercial or public sector customers that they could continue using Microsoft services without breaking European law. However, a few privacy advocates were quick to point out that only those companies who continue to use SCCs can continue providing assurances that data, whether at rest or in transit, is protected from third-party surveillance. So several of these statements were misleading. Google, for instance, is an electronic communication service provider, as a result of which it falls under both categories. The platform may very well be the largest search engine. Yet, the increasing awareness of data security and privacy might force users to look for other reliable options that assure the more secure sharing of private information online.



Quote for the day:

"Start at the end. You can't tell a story unless you know how it ends." -- Lewis

Daily Tech Digest - October 07, 2020

Tips To Strengthen API Security

Plugging the holes above is a good start. However, these are only defensive measures against known patterns; they don't help you keep guard against nuanced attack types. Ideally, security systems should stop cyberattacks before they even get started. Yet, Eliyahu recognizes a big gap in how APIs are monitored. Organizations “don’t have proper tools to know who is looking for vulnerabilities,” he said. “Tools are typically not that advanced. They mainly look for injections and known patterns.” Imagine a hacker probing an API. They will likely do so by trial and error, testing undocumented endpoints, sending malformed requests and so on. “They need to probe for hours or days before they find something,” said Eliyahu. If an AI could detect these odd attempts in minutes or seconds, IT could significantly reduce risk. Security solutions should thus leverage big data and AI to create baselines of typical behavior, then deter malicious activity the millisecond any nefarious probing begins. Eliyahu said such a security AI must consider dozens of behaviors, such as: How is the API being accessed? What parameters are being used? What are the relationships between parameters? What is the flow of API calls? What type of data can be exposed?


UK, French, Belgian blanket spying systems ruled illegal by Europe’s top court

In layman’s terms that means that a government can’t build a massive database of what everyone does and then query it later while investigating a case. Instead, they will need to carry out targeted surveillance and data retention - identifying specific people or accounts or phone numbers - and have a court review those requests to make sure they are not overly broad. The ruling is significant because it directly addresses the issue of national security - something that has been used for years to bypass existing personal data protection legislation - and states categorically that EU privacy laws still apply in such circumstances, almost always. The decision includes a specific carve-out when it comes to national security, noting that “in situations where a Member State is facing a serious threat to national security that proves to be genuine and present or foreseeable, that Member State may derogate from the obligation to ensure the confidentiality of data relating to electronic communications by requiring, by way of legislative measures, the general and indiscriminate retention of that data for a period that is limited in time to what is strictly necessary, but which may be extended if the threat persists.”


New Flaws in Top Antivirus Software Could Make Computers More Vulnerable

According to a report published by CyberArk researcher Eran Shimony today and shared with The Hacker News, the high privileges often associated with anti-malware products render them more vulnerable to exploitation via file manipulation attacks, resulting in a scenario where malware gains elevated permissions on the system. The bugs impact a wide range of antivirus solutions, including those from Kaspersky, McAfee, Symantec, Fortinet, Check Point, Trend Micro, Avira, and Microsoft Defender, each of which has been fixed by the respective vendor. Chief among the flaws is the ability to delete files from arbitrary locations, allowing the attacker to delete any file in the system, as well as a file corruption vulnerability that permits a bad actor to eliminate the content of any file in the system. Per CyberArk, the bugs result from default DACLs (short for Discretionary Access Control Lists) for the "C:\ProgramData" folder of Windows, which is used by applications to store data for standard users without requiring additional permissions. Given that every user has both write and delete permission on the base level of the directory, it raises the likelihood of a privilege escalation when a non-privileged process creates a new folder in "ProgramData" that could be later accessed by a privileged process.


How organizations can maintain a third-party risk management program from day one

First and foremost, we really took our time to hire subject matter experts in our industry. We’ve got lots of practitioners that have years and years of governance, risk, and compliance expertise. They’ve run third-party risk programs for some of the largest banks and financial institutions in the world. They’ve run risk programs at heavily regulated industries. Our people, first and foremost, are a huge differentiator. Number two, our product. It’s incredibly configurable, incredibly easy to use. But that’s such a common thing that folks claim. I actually like to say it’s easy to administrate. On some of the platforms we compete with, if you will, you can do those things, but you need to pay IT developers or other developers, or even the company that you purchased the system from, to configure it for you. From our perspective, we like to empower our clients to really run the programs and configure the applications on their own. And so, from that perspective, I like to say, ease of administration. It’s also easy to use. First and foremost, not just for our clients, but for the vendors. So, think about it, if you’re an important vendor in a vertical like financial services, you’re getting a million of these questionnaires.


Game Development with .NET

All the .NET tools you are used to also work when making games. Visual Studio is a great IDE that works with all .NET game engines on Windows and macOS. It provides world-class debugging, AI-assisted code completion, code refactoring, and cleanup. In addition, it provides real-time collaboration and productivity tools for remote work. GitHub also provides all your DevOps needs. Host and review code, manage projects, and build software alongside 50 million developers with GitHub. The .NET game development ecosystem is rich. Some of the .NET game engines depend on foundational work done by the open-source community to create managed graphics APIs like SharpDX, SharpVulkan, Vulkan.NET, and Veldrid. Xamarin also enables using platform native features on iOS and Android. Beyond the .NET community, each game engine also has its own community and user groups you can join and interact with. .NET is an open-source platform with over 60,000+ contributors. It’s free and a solid stable base for all your current and future game development needs. Head to our new Game Development with .NET site to get an overview of what .NET provides for you when making games. If you’ve never used Unity, get started with our step-by-step Unity get-started tutorial and script with C# as quickly as possible.


Suspected Chinese Hackers Unleash Malware That Can Survive OS Reinstalls

The company discovered the UEFI-based malware on machines belonging to two victims. It creates a Trojan file called "IntelUpdate.exe" in the Startup folder, which will reinstall itself even if the user finds and deletes it. "Since this logic is executed from the SPI flash, there is no way to avoid this process other than eliminating the malicious firmware," Kaspersky Lab said. The malware's goal is to deliver other hacking tools onto the victim’s computer, including a document stealer, which fetches files from the “Recent Documents” directory before uploading them to the hackers’ command and control server. Kaspersky Lab refrained from naming the victims, but said the culprits have been going after computers belonging to “diplomatic entities and NGOs in Africa, Asia, and Europe.” All the victims have some connection to North Korea, be it through non-profit activities or an actual presence in the country. While looking over the malware’s code, Kaspersky Lab also noticed the processes can reach out to a command and control server previously tied to Winnti, a suspected Chinese state-sponsored hacking group. In addition, the security firm found evidence that the malware's creators used the Chinese language while programming the code.
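As a concrete illustration of the persistence artifact described above, the following is a minimal, hypothetical detection sketch (the profile-path glob and the helper name are assumptions for illustration, not tooling from Kaspersky's report):

```python
from pathlib import Path

# Hypothetical sketch: scan every user profile's Startup folder for the
# "IntelUpdate.exe" dropper named in the report. The glob assumes a default
# Windows profile layout; pass the drive root (e.g. "C:/") as system_root.
STARTUP_GLOB = ("Users/*/AppData/Roaming/Microsoft/Windows/"
                "Start Menu/Programs/Startup/IntelUpdate.exe")

def find_startup_artifacts(system_root: str) -> list:
    """Return paths of IntelUpdate.exe files found in Startup folders."""
    return sorted(Path(system_root).glob(STARTUP_GLOB))
```

Note that a hit here only confirms the symptom: because the loader executes from SPI flash, deleting the file is not remediation; per Kaspersky, the firmware itself must be replaced.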


Q&A on the Book Infinite Gamification

There are two types of gaming to consider here: either players are cheating, or they have found a cheap way to score points, a loophole in our program design. For cheats, the best way to deal with this is to have a clear set of rules and principles you expect players to follow; then, if you find someone cheating, you can call them up and explain that they are not acting according to the stated rules of the program. In most cases, the person will desist, but sometimes you do need to enforce the ultimate sanction of kicking them off the program. The second type of gaming, finding cheap ways to score points, is for you to fix. The principle here is “don’t blame the gamer, blame the game”. There are lots of techniques you can use, such as making that activity less valuable or capping the number of points it can earn; the book lists these. In order to do this, though, you need to have framed your program as one that will iterate over time. Too many gamification programs are launched as if their rules are final and nothing can change. This is a recipe for disaster, because most programs aren’t right the first time around. Human nature being what it is, by leaving room to evolve the program, you give yourself the flexibility to get it right over time.


How to Survive a Crisis with AI-Driven Operations

As an enterprise turns to AI during a crisis -- whether for predictive sales modelling or automating customer-center operations -- leaders must prioritize developing employees’ core competencies around AI. Employees skilled in AI will of course be needed to develop and operate the new automation advancements, but the benefit extends beyond this. AI-skilled employees can be tapped to create a roadmap for how best to leverage the technology to drive business value in times of crisis. Organizations should consider developing internal reskilling and upskilling programs, or using third-party learning platforms, to help employees develop AI specializations. Employees can also be instrumental in galvanizing coworkers to readily adopt new AI technology, accelerating adoption rates as an organization looks to quickly scale the technology across the business to adjust operations in response to a crisis. Enterprises need a clear strategy around data governance in order to scale up AI quickly and successfully. Ensuring they have a clear set of repeatable protocols and methodologies in place to execute that strategy effectively is critical, so leaders don’t have to worry about compliance as they scale up AI in the face of a crisis.


5 blockchain use cases in finance that show value

Financial institutions traditionally work as intermediaries moving payments between different entities, which involves complex and time-consuming processes that add friction to transactions. Blockchain can streamline these processes -- notably reconciliation as well as clearing and settlement -- by removing the friction, thereby reducing the time and cost that financial institutions incur. For example, in April 2020 European financial technology company SIA launched a blockchain infrastructure to enable the Spunta Banca DLT, a private permissioned distributed ledger technology-based project for interbank reconciliation that is promoted by the Italian Banking Association (ABI) and coordinated and implemented by ABI Lab, a banking research and innovation center. "The reconciliation process for interbank transactions in Italy -- formerly governed by the spunta process -- has been notoriously complex," said Charley Cooper, managing director at R3, an enterprise blockchain technology company. "With multiple parties involved, the task of identifying and addressing inconsistencies has historically been hampered by a lack of standardization, the use of piecemeal and fragmented communication methods and no single version of the truth," he added.


Cloud data management – the post-Covid future of data protection for MSPs

The dynamic changes in 2020 have emphasized just how much MSPs need to be on the front foot with innovative data management solutions. And those that are pivoting to cloud data management are seeing both a boost in their revenue, and an ability to Covid-proof operations. After all, customers with on-site solutions may not be able to get an engineer visit in person. Companies are shrinking, or growing, rapidly, and need to be able to scale up or down accordingly – without hitting the bottom line. And for remote users the expectation is that they can work wherever they need to, whenever they need to. The only way MSPs can help companies meet these challenges is with cloud data management. ... Unify complex data: With a one-stop, cloud-data management platform, MSPs can stream customers' backup, archive and DR data, while offering invaluable insight into entire data estates. This enables them to gain borderless visibility of all critical data, structured and unstructured - from a single control center in real time. Importantly this includes Microsoft 365 and G Suite data. Eliminate downtime: Modern solutions now instantly restore individual files or whole systems, using user-driven recovery methods. 



Quote for the day:

"Distinguished leaders impress, inspire and invest in other leaders." -- Anyaele Sam Chiyson

Daily Tech Digest - October 06, 2020

What is Blockchain as a Service (BaaS) in the Tech Industry?

Blockchain is becoming more and more popular, not just in cryptocurrency but in financial transactions where security and transparency are a must. However, it is very expensive and technologically complicated to create, maintain, and operate a blockchain. That is why many smaller and mid-sized companies are hesitant to invest fully in blockchain even though its advantages are obvious. Blockchain as a Service can resolve this problem. It is based on the Software as a Service (SaaS) model, where a company specifically invests in creating, maintaining, and operating a blockchain. That company can then offer the advantages of blockchain to other companies as a service while charging a fee. It can offer blockchain on any of the available distributed ledgers, like Ethereum, Bitcoin, R3 Corda, Hyperledger Fabric, and Quorum, along with peripheral services such as system security, bandwidth management, and resource optimization. In this way, many smaller and mid-sized companies that don’t want to build and maintain their own blockchain systems from scratch can still obtain the advantages of blockchain for a nominal fee. These companies can focus on their core business and obtain value from the blockchain without needing to become experts in the technology.


How companies can overcome the content processing drawbacks of RPA

While the need to enlist assistance from additional software is valid, organisations must be careful about overspending, and ensure that the tools they invest in are for a clear, specific purpose. ... “There’s a couple of different ways for customers to overcome these shortcomings. One is to buy a tailored point solution like an OCR tool, which can extract data from documents, or they could invest in a workflow tool to help them orchestrate robots and humans, or perhaps buy some machine learning from Google to try and extract insights from their complex documents. These tools are designed to solve a very narrow set of problems, within tight parameters. “However, each of these has its own technical challenges; when embarking on one of these projects, you face significant cost, plus you need the right skills and tech to support each initiative. Each use case needs to be treated as an individual project, because you’re effectively buying for that particular need, and if you have lots of different types of data in your organisation, lots of different processes that have this level of unstructured data, you need to start again each time and buy the right solution to fix each individual problem.”


Red Hat Envisions Linux Operating System As More Than ‘Just A Commodity’

Enterprise Linux company Red Hat has wanted users to think more of their operating ‘engines’ for some time now, long before the company’s acquisition and integration into the IBM family back in 2018. The company released its Red Hat Enterprise Linux 7 software back in June 2014 and followed up with Red Hat Enterprise Linux 8 in May last year. Known affectionately among the developer cognoscenti as RHEL (pronounced ‘rel’, as in relate, relish or relax), the software has been built to align specifically with cloud-native computing, containers (a way of breaking application functions into smaller discrete blocks) and all forms of automation and AI-fuelled autonomous computing. Underpinning all the individual functions that Red Hat puts into its enterprise operating system is a desire for departments, teams and individual users to consider the OS as a performance vehicle in and of itself, i.e. something more than just a commodity engine. If that sounds like marketing spin, then it probably is… so can the company substantiate any of that gloss and explain how the engine in your computer system might actually change the way we work?


T2 security chip on Macs can be hacked to plant malware; cannot be patched

The attack requires combining two other exploits that were initially used for jailbreaking iOS devices — namely Checkm8 and Blackbird. This works because T2 chips share hardware and software features with iPhones. According to a post from Belgian security firm ironPeak, jailbreaking a T2 security chip involves connecting to a Mac/MacBook via USB-C and running version 0.11.0 of the Checkra1n jailbreaking software during the Mac’s boot-up process. Per ironPeak, this works because “Apple left a debugging interface open in the T2 security chip shipping to customers, allowing anyone to enter Device Firmware Update (DFU) mode without authentication.” “Using this method, it is possible to create a USB-C cable that can automatically exploit your macOS device on boot,” ironPeak said. This allows an attacker to get root access on the T2 chip and modify and take control of anything running on the targeted device, even recovering encrypted data […] The danger regarding this new jailbreaking technique is pretty obvious. Any Mac or MacBook left unattended can be hacked by someone who can connect a USB-C cable, reboot the device, and then run Checkra1n 0.11.0.


Classifying Your Third Parties: An Essential Third Party Due Diligence First Step

Of course, this brings us to ask when a company “knows” that a third party will make an improper payment. Under the FCPA, a person has the requisite knowledge to be liable when he or she is aware of the potential wrongdoing, cognizant of a high probability of the existence of such wrongdoing, or intentionally ignorant of the potential wrongdoing. In other words, Congress did not want to allow people to “sneak around” the FCPA by using a third party. As Congress made clear, it meant to impose liability not only on those with actual knowledge of wrongdoing, but also on those who purposefully avoid actual knowledge: [T]he so-called “head-in-the-sand” problem – variously described in the pertinent authorities as “conscious disregard,” “willful blindness” or “deliberate ignorance” – should be covered so that management officials could not take refuge from the Act’s prohibitions by their unwarranted obliviousness to any action (or inaction), language or other “signaling device” that should reasonably alert them of the “high probability” of an FCPA violation.


People-focused digital transformation: What benefit does it have for your employees?

“Digitally mature” companies, where leadership teams are proactively jumping on and implementing digital trends, are increasingly becoming a must-have for job-seekers. From attracting to retaining talent, organizations that are pioneering a digital strategy for their processes, efficiently using technology and adapting in line with digital, will undoubtedly see more success than organizations that don’t. The focus is no longer just on what an employee can bring to a company but also on what the company can deliver to the employee to develop their skill sets in preparation for the next step of their career. And, with research revealing that the benefits of a digital-first company include improved operational efficiencies as well as a faster time to market, it’s clear why a prospective employee would opt for a digitally transformed company over one that still runs on mostly manual processes. Factors such as remote working, using technology to improve productivity and developing skills away from an office-based environment can lead to people enjoying their jobs more.


New ransomware vaccine kills programs wiping Windows shadow volumes

This weekend, security researcher Florian Roth released the 'Raccine' ransomware vaccine, which monitors for the deletion of shadow volume copies using the vssadmin.exe command. "We see ransomware delete all shadow copies using vssadmin pretty often. What if we could just intercept that request and kill the invoking process? Let's try to create a simple vaccine," Raccine's GitHub page explains. Raccine works by registering the raccine.exe executable as a debugger for vssadmin.exe using the Image File Execution Options Windows registry key. Once raccine.exe is registered as a debugger, every time vssadmin.exe is executed, it will also launch Raccine, which will check whether vssadmin is trying to delete shadow copies. If it detects a process using 'vssadmin delete' or 'vssadmin resize shadowstorage', it will automatically terminate the process; ransomware usually performs these deletions before it begins encrypting files on a computer. It should also be noted that Raccine may terminate legitimate software that uses vssadmin.exe as part of its backup routine. Roth plans on adding the ability for certain programs to bypass Raccine in the future so that they are not mistakenly terminated.
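The interception decision described above is essentially string matching on the vssadmin command line. Here is a hedged model of that check in Python; Raccine itself is a compiled Windows binary, so this function is an illustrative sketch of the logic, not its actual code:

```python
def should_terminate(command_line: str) -> bool:
    """Model of Raccine's check: does this vssadmin invocation look like
    shadow-copy wiping ('delete' or 'resize shadowstorage')?"""
    tokens = command_line.lower().split()
    # Only vssadmin invocations are of interest.
    if not tokens or "vssadmin" not in tokens[0]:
        return False
    rest = tokens[1:]
    return "delete" in rest or ("resize" in rest and "shadowstorage" in rest)

# The classic ransomware one-liner is caught...
print(should_terminate("vssadmin.exe Delete Shadows /All /Quiet"))  # True
# ...while an ordinary listing is left alone.
print(should_terminate("vssadmin.exe list shadows"))                # False
```

As the article notes, a real deployment needs an allow-list on top of this, since legitimate backup software issues some of the same commands.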


The Abyss of Ignorable: A Route into Chaos Testing from Starling Bank

Imagine if every abstraction came with a divinely guaranteed SLA. (They don’t.) Every class and method call, every library and dependency. Pretend that the SLA is a simple percentage. (They never are.) There are some SLAs (100%, fifty nines) for which it would be wrong to even contemplate failure, let alone handle it or test for it. The seconds you spend thinking about it would already be worth more than the expected loss from failure. In such a world you would still code on the assumption that there are no compiler bugs, JVM bugs, CPU instruction bugs - at least until such things were found. On the other hand, there are SLAs (95%, 99.9%) for which, at reasonable workloads, failure is effectively guaranteed. So you handle them, test for them, and your diligence is rewarded. We get our behaviour in these cases right. We rightly dismiss the absurd and handle the mundane. However, human judgement fails quite badly when it comes to unlikely events. And when the cost of handling unlikely events (in terms of complication) looks unpleasant, our intuition tends to reinforce our laziness. A system does not have to be turbulent or complex to expose this.
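The "effectively guaranteed" claim is just arithmetic over independent calls. A quick sketch (the call counts below are invented examples for illustration, not figures from the article):

```python
def p_any_failure(per_call_success: float, n_calls: int) -> float:
    """Probability of at least one failure across n independent calls,
    treating the SLA as a simple per-call success rate."""
    return 1.0 - per_call_success ** n_calls

# At a 99.9% SLA, ten calls rarely see a failure (about 1%)...
print(f"{p_any_failure(0.999, 10):.4f}")
# ...but a modest 10,000-call workload makes one all but certain (>99.99%).
print(f"{p_any_failure(0.999, 10_000):.4f}")
```

This is the asymmetry the paragraph describes: the same 99.9% figure is "ignore it" at one workload and "handle it and test for it" at another.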


Announcing third-party code scanning tools: static analysis & developer security training

Code scanning is a developer-first, GitHub-native approach to easily find security vulnerabilities before they reach production. Code scanning is powered by GitHub’s CodeQL static scanning engine and is extensible to include third-party security tools. Extensibility provides a lot of flexibility and customizability for teams while maintaining the same user experience for developers. This capability is especially helpful if you: Work at a large organization that’s grown through acquisitions and has teams running different code scanning tools; Need additional coverage for specific areas such as mobile, Salesforce development, or mainframe development; Need customized reporting or dashboarding services; Or simply want to use your preferred tools while benefiting from a single user experience and a single API. What makes this possible is GitHub code scanning’s API endpoint that can ingest scan results from third-party tools using the open standard Static Analysis Results Interchange Format (SARIF). Third-party code scanning tools are initiated with a GitHub Action or a GitHub App based on an event in GitHub, like a pull request.
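The SARIF format mentioned above is plain JSON. The minimal run below sketches the shape of a result a third-party tool might produce; the tool name, rule id, and file path are invented placeholders, and a production upload must satisfy the full SARIF 2.1.0 specification:

```python
import json

# Minimal SARIF 2.1.0 log with one result from a hypothetical scanner.
sarif_log = {
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {
            "name": "example-scanner",  # placeholder tool name
            "rules": [{"id": "EX001",
                       "shortDescription": {"text": "Hard-coded secret"}}],
        }},
        "results": [{
            "ruleId": "EX001",
            "level": "warning",
            "message": {"text": "Possible hard-coded credential."},
            "locations": [{"physicalLocation": {
                "artifactLocation": {"uri": "src/app.py"},  # placeholder path
                "region": {"startLine": 42},
            }}],
        }],
    }],
}

# Serialized, this is the document a GitHub Action or App would submit to
# the code scanning SARIF upload endpoint.
payload = json.dumps(sarif_log)
```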


It's Not Magic, It's Elastic: Getting Digital Transformation Right

Covid-19 battered many sectors, and the restaurant industry was certainly near the top of the list. Yet while lockdowns and contagion fears cratered restaurant sales in the second quarter of 2020, fast-casual chain and PwC customer Chipotle’s revenue only fell a modest 4.8%. How did they pull that off? By growing digital sales by 216%. By July, the company’s sales were rising again. Digital sales still continued to rise, too. They provided nearly half of Chipotle’s July sales. This is elasticity — a quick pivot to digital sales, then keeping that online revenue growing even as in-person purchases pick up again. Another fast-casual chain, Panera, also pivoted fast during the epidemic’s peak. While on-site dining was shut down, Panera stores sold groceries and offered them for curbside pickup. Or consider lodging, another sector that the epidemic hit especially hard. Red Roof Inns seemed to realize that their “essential” offering was private space with WiFi — so they started offering day rates to people who wanted to work from anywhere but home. These companies were elastic because they built out their digital infrastructure.



Quote for the day:

"If you want people to think, give them intent, not instruction." -- David Marquet

Daily Tech Digest - October 05, 2020

Egregor Ransomware Adds to Data Leak Trend

As with other ransomware gangs, such as Maze and Sodinokibi, the operators behind the Egregor ransomware are threatening to leak victims' data if the ransom demands are not met within three days, according to an Appgate alert. The cybercriminals linked to Egregor are also taking a page from the Maze playbook, creating a "news" site on the darknet that lists targeted victims and posts updates about when stolen and encrypted data will be released, according to the alert. Egregor's ransom note also says that, aside from decrypting all the files in the event the company pays the ransom, the gang will provide recommendations for securing the company's network, 'helping' it avoid being breached again and acting as a sort of 'black hat pentest team,' according to Appgate. It's not clear how much ransom the operators behind Egregor are demanding or whether any data has been leaked, according to Appgate. A copy of one ransom note posted online notes the cybercriminals plan to release stolen data through what they call "mass media." While Appgate released an alert to customers on Friday, the Egregor ransomware variant was first spotted in mid-September by several independent security researchers, including Michael Gillespie, who posted samples of the ransom note on Twitter.


Five reasons why Scrum is not helping in getting twice the work done in half the time

Do you measure the velocity of the team? Do you calculate how long a person was busy doing something? Do you measure estimated time for a task vs. actual time spent? Or do you measure things like defects per story, defect removal efficiency and code coverage? None of this is harmful as long as it is used for the right purposes, like velocity for forecasting and code coverage for quality of code. But it makes more sense to measure time to market, customer satisfaction, NPS, usage index, response time, and innovation rate. If you were releasing once a year and are now releasing every quarter, you have already improved by 400%, but would you like to stop there? Look at how much time your team takes from development to deployment in production. ... We wanted people to reach their destinations faster by driving faster. We taught them how to drive, managed traffic well, and put instructions everywhere, but people are still not going above 40 km an hour, although overall times have improved as there are fewer troubles while driving. When we checked, people complained about the 20-year-old cars they have been driving. We have a similar story with our teams.


How technology will shape the future of the workplace

Organisations often find it challenging to carry out business transformation projects successfully — and shaping the future of the workplace is no different. While there may be a willingness to change, there are many ways that change projects become stuck in the mire, their momentum stalled by hundreds of micro-actions taken (and not taken) throughout the organisation. The pandemic changed things. Businesses have learned that a major change project that would normally have taken six months to a year — such as enabling everyone to work remotely — can be done much faster. Necessity is indeed the mother of invention; innovation happens when people and organisations realise they have to act fast to stay competitive. ... As virtual working becomes less novel, more businesses will explore ways that they can support their employees and keep the team working efficiently. We’ll also start to see a re-evaluation of what working means. The days when it was defined by who sat at their desk the longest had already started to wane before the pandemic hit. Now, with the freedom to be creative that lockdown granted business leaders, companies are starting to look beyond hours worked and things produced and towards the quality of that work and the effect it has on the goals of the business.


Data Management skills

Nowadays, digital transformation is really about applying a data-driven approach to every aspect of the business in an effort to create a competitive advantage. That's why more and more companies want to build their own data lake solutions. This trend is continuing, and those skills are still in demand. The most popular tools here are still HDFS for on-prem solutions and the cloud data storage offerings from AWS, GCP, and Azure. Aside from that, there are also data platforms trying to fill several niches and create integrated solutions, for example Cloudera, Apache Hudi, and Delta Lake. ... There are Data Warehouses, where the information is sorted, ordered, and presented in the form of final conclusions (the rest is discarded), and Data Lakes — "dump everything here, because you never know what will be useful". The Data Hub is aimed at data that fits neither the first nor the second category. The Data Hub architecture allows you to leave your data where it is, centralizing the processing but not the storage. The data is searched and accessed right where it is located. But because the Data Hub is planned and managed, organizations must invest significant time and energy determining what their data means, where it comes from and what transformations it must complete before it can be put into the Data Hub.


These 10 tech predictions could mean huge changes ahead

According to Ashenden, the need to support creativity and innovation is urgent for businesses in the current context. As a result, the tools that enable collaboration are getting a huge boost – and not a short-term one. "Those areas will become much more central going forward," she said. "A lot of work processes that once relied on face-to-face have gone digital now, and that won't go back. Even when people are back in the office – once these things live in a digital world, that's where they live." Connectivity, according to CCS Insights, will also change as a result of the switch to remote work. From next year, the firm expects network operators to offer dedicated "work from home" packages to businesses, differentiating between corporate and personal usage, so that employers can provide staff with appropriate services such as security, collaboration tools and IT support. Operators will also increase their focus on connectivity in suburban zones, rather than city centers, as the workforce becomes increasingly established outside of the office. And as connectivity becomes ever-more important, the research firm predicts that the next three years will be rocked by governments' actions to better protect their national telecom infrastructure.


Improving WebAssembly and Its Tooling -- Q&A with Wasmtime’s Nick Fitzgerald

It’s about discovering otherwise hidden and hard-to-find bugs. There’s a ton that we miss with basic unit testing, where we write out some fixed set of inputs and assert that our program produces the expected output. We overlook some code paths or we fail to exercise certain program states. The reliability of our software suffers. We are fallible, but at least we can recognize our limitations and compensate for them. Testing pseudo-random inputs helps us avoid our own biases by feeding our system “unexpected” inputs. It helps us find integer overflow bugs or pathological inputs that allow (untrusted and potentially hostile) users to trigger out-of-memory bugs or timeouts that could be leveraged as part of a denial-of-service attack. Some people are familiar with testing pseudo-random inputs via “property-based testing”, where you assert that some property always holds and the testing framework tries to find inputs where your invariant is violated. For example, if you are implementing the reverse method for an array, you might assert the property that reversing an array twice yields an array identical to the original.
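As a concrete sketch of that reverse example, here is a hand-rolled property check in Python; a library like Hypothesis (or a fuzzer, in Wasmtime's case) automates the input generation, shrinking, and reporting that this toy loop omits:

```python
import random

def reverse(xs):
    """The implementation under test."""
    return xs[::-1]

def check_reverse_roundtrip(trials: int = 1000, seed: int = 0) -> None:
    """Property: reversing any list twice yields the original list."""
    rng = random.Random(seed)
    for _ in range(trials):
        # Generate a pseudo-random list of pseudo-random length.
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        assert reverse(reverse(xs)) == xs, f"property violated for {xs!r}"

check_reverse_roundtrip()  # silent success means the property held
```

The point is the inversion of effort: instead of hand-picking inputs, you state the invariant once and let randomness probe the cases your biases would skip.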


7 Essentials of Digital Transformation Success

Consumers have come to expect organizations to use their personal information to create custom solutions. Especially during the pandemic, consumers have become accustomed to the benefits of Netflix and Spotify using machine learning for entertainment recommendations, Zoom using just a couple of clicks to create video engagement, and Google Home or Amazon Alexa using voice for everything from answering inquiries to simplifying shopping. These same consumers expect their bank or credit union to use their relationship data, behaviors and preferences the same way … or better. But advanced analytics and AI should not be a goal in and of itself. These tools should be used to support broader strategies. According to Wharton, “Instead of exhaustively looking for all the areas AI could fit in, a better approach would be for companies to analyze existing goals and challenges with a close eye for the problems that AI is uniquely equipped to solve.” Some solutions include everything from fraud detection to facilitating predictive solution recommendations for customers. Now more than ever, AI needs to be used to deliver human-like intelligence across the entire organization.


Inadequate skills and employee burnout are the biggest barriers to digital transformation

The ongoing disruption of the pandemic has shown how important it can be for businesses to be built for change. Many executives are facing demand fluctuations, new challenges to support employees working remotely and requirements to cut costs. In addition, the study reveals that the majority of organizations are making permanent changes to their organizational strategy. For instance, 94% of executives surveyed plan to participate in platform-based business models by 2022, and many reported they will increase participation in ecosystems and partner networks. Executing these new strategies may require a more scalable and flexible IT infrastructure. Executives are already anticipating this: the survey showed respondents plan a 20 percentage point increase in prioritization of cloud technology in the next two years. What’s more, executives surveyed plan to move more of their business functions to the cloud over the next two years, with customer engagement and marketing being the top two cloudified functions. COVID-19 has disrupted critical workflows and processes at the heart of many organizations’ core operations. Technologies like AI, automation and cybersecurity that could help make workflows more intelligent, responsive and secure are increasing in priority across the board for responding global executives.


Is Cloud Migration a Path to Carbon Footprint Reduction?

Energy efficiency within an enterprise may go hand in hand with other organizational traits, according to the report. Accenture’s research from 2013 to 2019 found that companies that consistently earned high marks on environmental, social, and governance performance also saw operating margins 4.7x higher than organizations with lower performance in those areas. There were also indications of higher annual returns to shareholders among those environmentally minded enterprises. In addition to the potential benefit cloud migration presents for the environment, Accenture’s report shows there can be total cost of ownership savings of up to 30-40% when organizations migrate to more cost-efficient public clouds. The report also shed light on how cloud migration affected Accenture’s own expenses. The firm runs 95% of its applications in the cloud, the report says. After its third year of migration, Accenture saw $14.5 million in benefits, plus another $3 million in annualized costs saved by right-sizing its service consumption. Moving to the cloud might not mean much in terms of cutting energy consumption, however, if the service provider does not take steps to be more energy efficient.


Neuromorphic computing could solve the tech industry's looming crisis

Rather than separate out the memory and computing like most chips in use today, neuromorphic hardware keeps both together, with processors having their own local memory -- a more brain-like arrangement -- that saves energy and speeds up processing. Neuromorphic computing could also help spawn a new wave of artificial intelligence (AI) applications. Current AI is usually narrow and developed by learning from stored data, developing and refining algorithms until they reliably match a particular outcome. Using neuromorphic tech's brain-like strategies, however, could allow AI to take on new tasks. Because neuromorphic systems can work like the human brain -- able to cope with uncertainty, adapt, and use messy, confusing data from the real world -- it could lay the foundations for AIs to become more general. "The more brain-like workloads approximate computing, where there's more fuzzy associations that are in play -- this rapid adaptive behaviour of learning and self-modifying the programme, so to speak. These are types of functions that conventional computing is not so efficient at, and so we were looking for new architectures that can provide breakthroughs," says Mike Davies.



Quote for the day:

"It's not the position that makes the leader. It's the leader that makes the position." -- Stanley Huffty