Daily Tech Digest - August 26, 2021

New Passwordless Verification API Uses SIM Security for Zero Trust Remote Access

On the spectrum between passwords and biometrics lies the possession factor – most commonly the mobile phone. That's how SMS OTP and authenticator apps came about, but these bring fraud risk and usability issues, and are no longer the best solution. The simpler, stronger solution to verification has been with us all along – using the strong security of the SIM card that is in every mobile phone. Mobile networks authenticate customers all the time to allow calls and data. The SIM card uses advanced cryptographic security, and is an established form of real-time verification that doesn't need any separate apps or hardware tokens. However, the real magic of SIM-based authentication is that it requires no user action. It's there already. Now, APIs by tru.ID open up SIM-based network authentication for developers to build frictionless, yet secure verification experiences. Any concerns over privacy are alleviated by the fact that tru.ID does not process personally identifiable information between the network and the APIs. It's purely a URL-based lookup.
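As a sketch of how a developer might consume such a verification API (the base URL, paths, and response fields below are illustrative assumptions, not tru.ID's documented contract – consult their API reference for the real one):

```python
import json

# Illustrative base URL -- NOT tru.ID's real endpoint.
API_BASE = "https://api.example-sim-verify.test/v1"

def build_check_request(phone_number):
    """Assemble the request that asks the mobile network to verify the
    SIM behind a phone number (hypothetical payload shape)."""
    return {
        "url": f"{API_BASE}/checks",
        "body": json.dumps({"phone_number": phone_number}),
    }

def is_verified(check_response):
    """A check passes only if the network confirmed that the SIM in the
    device matches the claimed number (hypothetical field names)."""
    return (check_response.get("status") == "COMPLETED"
            and check_response.get("match") is True)

# Example of a completed check whose SIM matched the claimed number
sample = {"check_id": "c-123", "status": "COMPLETED", "match": True}
print(is_verified(sample))  # True
```

The key property the excerpt describes survives even in this toy version: the client never handles an OTP or any personal data, only a check result keyed by an opaque identifier.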


Cognitive AI Meets IoT: A Match Made in Heaven

The progressive trends of mobile edge computing and cloudlets are diffusing edge-based intelligence into connected and more controlled enterprise systems. However, within the diversity of pervasive cyber-physical ecosystems, the autonomy of discrete edge nodes requires gains in operational intelligence with minimal supervision. Emerging innovation in cognitive computational intelligence is revealing great potential to introduce contemporary soft computing-based algorithms, architectural rethinking, and progressive system design for the next generation of IoT systems. Cognitive IoT systems break down the rigid partitions between the silos and interdependencies of software and hardware subsystems. The edge-native AI component is flexible enough to recognize changes in the physical environment and dynamically adjust analytical outcomes in real time. As a result, human-to-machine and machine-to-machine interaction becomes more dynamic, interoperable, and contextual to the time and scope of any operation.


How the pandemic delivered the future of corporate cybersecurity faster

At some point it becomes untenable and inefficient to manage all these separate solutions. That point gets closer every day as teams have to deal with the complexities and identity management challenges of remote work. Siloed solutions also mean IT staff must monitor several different consoles and may not connect the dots when incidents are flagged on separate platforms. They also require complex and costly integration projects to get the functionality needed. And even then, they’ll likely still require manual oversight. Moving toward all-in-one security solutions can help replicate the sense of cohesion that once existed in on-premises network security along with new efficiencies. All-in-one solutions can share data across the different components, leading to better and more efficient function. And by adding new modules instead of products when new tools are needed, you eliminate the expense and complications of integration. Companies and individuals have already gotten used to paying for things like data, cloud storage and web hosting based on how much they use them.


OnePercent ransomware group hits companies via IcedID banking Trojan

The OnePercent group's ransom note directs victims to a website hosted on the Tor anonymity network where they can see the ransom amount and contact the attackers via a live chat feature. The note also includes a Bitcoin address where the ransom must be paid. If victims do not pay or contact the attackers within one week, the group attempts to contact them via phone calls and emails sent from ProtonMail addresses. "The actors will persistently demand to speak with a victim company’s designated negotiator or otherwise threaten to publish the stolen data," the FBI said. "When a victim company does not respond, the actors send subsequent threats to publish the victim company’s stolen data via the same ProtonMail email address." The extortion has different levels. If the victim does not agree to pay the ransom quickly, the group threatens to release a portion of the data publicly and if the ransom is not paid even after this, the attackers threaten to sell the data to the REvil/Sodinokibi group to be auctioned off. Aside from the REvil connection, OnePercent might have been tied to other ransomware-as-a-service (RaaS) operations in the past too.


Why Agile Transformations Fail In The Corporate Environment

One key reason an Agile transformation will fail is when all the focus is concentrated in just one of the three circles above. It is imperative that we consider these three circles like a Venn diagram and regularly monitor our operating presence. Ideally, we want to operate in all three circles, but it is hard to find balance. Suppose we are working in the mindset and framework circles and trying to build a perfect product with perfect architecture. Spending too much time making things perfect, we are likely to miss the market window, or run into financial difficulties. Similarly, if we operate in the mindset and business agility circles, for example, it could be great for the short term to get a prototype to market quickly, but we will be drowning in technical debt in the long run. Or, imagine that we operate in the framework and business agility circle to build a perfect hotel for our customers — we could miss the fact that they really need a bed-and-breakfast, not a hotel, by not considering the mindset circle. All three perspectives are essential, so to maximize the efficiencies, we need to keep finding the balance.


How the tech sector can provide opportunities and address skills gaps in young people

After all, as far as technology is concerned, none of us are beyond the need for further training and development. The McKinsey Global Institute has recently suggested that as many as 375 million people will need to acquire new skills in the next decade due to the predicted rise of artificial intelligence and automation – skills that few, even in tech-adjacent industries, currently possess. Keeping this kind of projection firmly in mind helps us to remember that the acquisition of new and essential skills is an ongoing process for everyone. As such, employers should not discount those potential candidates who don’t necessarily come from a tech-heavy background. With robust on-the-job training processes and a supportive, inclusive approach towards IT talent, young workers who perhaps missed out on IT fundamentals at school or who chose to focus, for example, on humanities-based university courses can absolutely receive the same attention and prospects as those from a tech-heavy background. A recent government report on aspects of the skills gap has already uncovered an uplifting trend in this direction, with 57% of employers confident that they can find resources to train their employees.


A closer look at two newly announced Intel chips

Intel’s upcoming next-generation Xeon is codenamed Sapphire Rapids and promises a radical new design and gains in performance. One of its key differentiators is its modular SoC design. The chip has multiple tiles that appear to the system as a monolithic CPU, and all of the tiles communicate with each other, so every thread has full access to all resources on all tiles. In a way it’s similar to the chiplet design AMD uses in its Epyc processors. Breaking the monolithic chip up into smaller pieces makes it easier to manufacture. In addition to faster/wider cores and interconnects, Sapphire Rapids has an expanded last-level cache (LLC) of up to 100MB that can be shared across all cores, with up to four memory controllers and eight memory channels of DDR5 memory, next-gen Optane Persistent Memory, and/or High Bandwidth Memory (HBM). Sapphire Rapids also offers Intel Ultra Path Interconnect 2.0 (UPI), a CPU interconnect used for multi-socket communication. UPI 2.0 features four UPI links per processor with 16GT/s of throughput and supports up to eight sockets.


How to encourage healthy conflict: 8 tips from CIOs

We unpack ideas and differences, seeking to understand each other’s points of view and the experiential lens through which the issue(s) are being evaluated, and then work collaboratively in the spirit of best serving our customers (external and internal) to reach the best decision and path to resolution. In the end, and most importantly, we are a team; so, when we work through the conflict and land on a course of action or decision, we all align, rally, and go into full-on execution mode as one team, with one agenda. Recognize that each team member brings a unique set of experiences, ideas, and beliefs to every conversation and decision. As a leader, you need to be acutely aware of when and how team members engage in conflict and the behaviors that precede and follow such discussions. Encourage team members to participate and share their ideas; candidly and directly elicit their honest and important views on the matters, even when the topics may be challenging and the conflict intense, and especially if the team member may be more quiet or prone to avoid the heat of the debate. 


The Office of Strategy in the Age of Agility

Agile methods such as scrum, kanban and lean development have gone beyond the realm of product design and development to other organizational functions, such as customer engagement, employee motivation, and execution amid uncertainty. From the earliest Agile Manifesto, we know the following principles: 1) people over process and tools, 2) working prototypes over excessive documentation, 3) responding to change over following a plan, and 4) customer collaboration over rigid contracts. However, in the realm of strategy, agile is often confused with adhocism and assumed to lead to more chaos than value. But as Jeff Bezos instructs us, when making strategy, one must focus on the long term, the things that will remain largely constant over time. In the case of Amazon, the strategy is three-fold: customer obsession, invention, and being patient; for customers, what matters is greater speed, wider selection, and lower cost. With so few strategic priorities, how does the company manage to remain relevant? 


Microservice Architecture and Agile Teams

As services can be worked on in parallel, a team can bring more developers to bear on a problem without them getting in each other’s way. It can also be simpler for those developers to understand their part of the system, as they can focus their concern on just one part of it. Process isolation also makes it feasible to vary the technology choices a team makes, perhaps mixing different programming languages, programming styles, deployment platforms, or databases to discover the perfect blend. Microservice architecture gives the team more concrete boundaries in a system around which ownership lines can be drawn, allowing much more flexibility in how ownership is divided. It enables each service to be developed independently by a team that is concentrated on that service and, as a result, makes continuous deployment possible for complex applications. It also enables each service to be scaled individually. It has been observed that when a team or organization adopts microservice architecture, the real gain is the built-in agility the organization gets.
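A minimal sketch of the boundary idea (service names and contracts are invented for illustration; real microservices would sit behind network endpoints and own separate databases, not share one process):

```python
import json

class UserService:
    """Owned by one team; its internals can change freely because callers
    only ever see the narrow JSON contract of handle()."""
    def __init__(self):
        self._users = {1: {"id": 1, "name": "Ada"}}  # this service's private data

    def handle(self, request):  # contract: JSON-shaped dict in, JSON string out
        user = self._users.get(request["user_id"])
        return json.dumps(user)

class OrderService:
    """A different team's service; it consumes UserService only through
    the contract, never by reaching into its internals."""
    def __init__(self, user_service):
        self.user_service = user_service

    def create_order(self, user_id, item):
        user = json.loads(self.user_service.handle({"user_id": user_id}))
        if user is None:
            raise ValueError("unknown user")
        return {"user": user["name"], "item": item}

orders = OrderService(UserService())
print(orders.create_order(1, "widget"))  # {'user': 'Ada', 'item': 'widget'}
```

Because each side depends only on the contract, either team could rewrite its service in another language or swap its datastore without coordinating a joint release.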



Quote for the day:

"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney

Daily Tech Digest - August 25, 2021

Forrester exec on robotic process automation’s ‘defining point’

Building a good feedback loop will also be essential. Adding better analytics and process discovery can train the AIs and allow them to deliver better recommendations while also shouldering more of the load. “The integration of process mining, digital work analytics, and machine learning will, in the short term, help generate RPA scripts that mimic the capabilities of humans, and, in the long term, help design more-advanced human-machine or human-in-the-loop (HITL) interaction,” the report stated. Another challenge will be finding the best way to deliver and price the software. Cloud-based services are now common and customers have the choice between installing the software locally or relying upon cloud services managed by the vendor. Many vendors often price their software by the number of so-called “bots” assigned to particular tasks. The report imagines that more fine-grained precision will offer customers the ability to move to “consumption-based pricing” that better follows their usage. “This may be per minute or hour of robot time or per task executed, but it solves the problem of bot underutilization,” the report predicted.
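A quick back-of-the-envelope comparison, using entirely made-up rates, of why consumption-based pricing addresses the bot-underutilization problem the report describes:

```python
# Hypothetical figures for illustration only -- not any vendor's pricing.
bot_license_per_month = 1000.0   # flat per-bot fee
rate_per_minute = 0.05           # consumption-based rate
minutes_used = 3000              # bot is busy only ~7% of a 30-day month

consumption_cost = minutes_used * rate_per_minute
print(consumption_cost)  # 150.0 -- far below the flat license for an idle bot
```

At these made-up rates, the flat license only breaks even once the bot runs about 20,000 minutes a month; below that, per-minute billing is cheaper, which is exactly the underutilization case.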


Kubernetes hardening: Drilling down on the NSA/CISA guidance

Your security efforts shouldn’t stop at the pods. Networking within the cluster is also key to ensuring that malicious activities can’t occur, and if they do, that they can be isolated to mitigate their impact. In addition to securing the control plane, key recommendations include using network policies and firewalls to separate and isolate resources, encrypting traffic in motion, and protecting sensitive data such as secrets at rest. One core way of doing this is taking advantage of Kubernetes’ native namespace functionality. While three namespaces are built in by default, you can create additional namespaces for your applications. Not only does the namespace construct provide isolation, but it also lets you use resource policies to limit storage and compute resources at the namespace level. This can prevent resource exhaustion, whether accidental or malicious, which can have a cascading effect on the entire cluster and all its supported applications. While namespaces help provide resource isolation, network policies control the flow of traffic between the various components, including pods, namespaces and external IP addresses.
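As a sketch, a namespace-scoped quota and a default-deny network policy might look like this (the namespace name and limit values are illustrative; tune them to your workloads):

```yaml
# Cap compute in a namespace so one team cannot exhaust the cluster
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota        # illustrative name
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# Deny all pod traffic in the namespace by default; allow flows explicitly
# with additional, narrower NetworkPolicies
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a
spec:
  podSelector: {}           # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```

Note that NetworkPolicy objects only take effect when the cluster's CNI plugin enforces them.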


Conductor: Why We Migrated from Kubernetes to Nomad

The first major issue we ran into with this job type was the GKE autoscaler. As customers’ workloads increased, we started to have incidents where pending jobs were piling up exponentially, but nothing was scaled up. After examining the Kubernetes source code, we realized that the default Kubernetes autoscaler is not designed for batch jobs, which typically have a low tolerance for delay. We also had no control over when the autoscaler started removing instances. It was set to 10 minutes as a static configuration, and the accumulated idle time increased our infrastructure cost because we could not rapidly scale down once there was nothing left to work on. We also discovered that the Kubernetes job controller, a supervisor for pods carrying out batch processes, was unreliable: the system would lose track of jobs and leave them in the wrong state. And there was another scalability issue on the control plane side: there was no visibility into the size of the GKE clusters’ control plane. As load increased, GKE would automatically scale up the control plane instances to handle more requests.


Attackers Actively Exploiting Realtek SDK Flaws

“Specifically, we noticed exploit attempts to ‘formWsc’ and ‘formSysCmd’ web pages,” SAM’s report on the incident said. “The exploit attempts to deploy a Mirai variant detected in March by Palo Alto Networks. Mirai is a notorious IoT and router malware circulating in various forms for the last 5 years. It was originally used to shut down large swaths of the internet but has since evolved into many variants for different purposes.” The report goes on to link another similar attack to the attack group. On Aug. 6 Juniper Networks found a vulnerability that just two days later was also exploited to try and deliver the same Mirai botnet using the same network subnet, the report explained. “This chain of events shows that hackers are actively looking for command injection vulnerabilities and use them to propagate widely used malware quickly,” SAM said. “These kinds of vulnerabilities are easy to exploit and can be integrated quickly into existing hacking frameworks that attackers employ, well before devices are patched and security vendors can react.”


The difference between digitization, digitalization & digital transformation

Digitization is the process of changing from an analog to digital form, also known as digital enablement. In other words, digitization takes an analog process and changes it to a digital form without any different-in-kind changes to the process itself. ... Now, perhaps more disputed is the definition of digitalization. According to Gartner, we can define it as the use of digital technologies to change a business model and provide new revenue and value-producing opportunities. This means that businesses can start to use their digitized data. Through advanced technologies, businesses will be able to discover the potential of processed digital data and help them achieve their business goals. ... Finally, we are introduced to the concept of digital transformation. Here, Gartner states that digital transformation can refer to anything from IT modernization, for example, Cloud computing, to digital optimization, to the invention of new digital business models. Namely, this is the process of fully benefiting from the enormous digital potential in a business. 


Bootstrapping the Authentication Layer and Server With Auth0.js and Hasura

Hasura is a GraphQL engine for PostgreSQL databases. Hasura is also not the only available GraphQL engine; there are other solutions like Postgraphile and Prisma. However, after trying a few of them, I've come to appreciate Hasura for several reasons: Hasura is designed for client-facing applications and is one of the simplest solutions to set up; with Hasura, you get a production-level GraphQL server out of the box that’s performant and has a built-in caching system; it has a powerful authorization engine based on row-level security (RLS) that allows building granular and complex permission systems; you can host Hasura on-premise using their Docker image, but you can also set up a working GraphQL server in a matter of minutes using Hasura Cloud (this option is perfect for scaffolding your app and is the one we will use today); and Hasura's dashboard is powerful and user-friendly – you can write and test your GraphQL queries, manage your database schema, add custom resolvers and create subscriptions, all from one place.
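A rough sketch of what a client call against a Hasura project might look like (the project URL, table, and field names are assumptions for illustration; `/v1/graphql` is Hasura's standard GraphQL path, and the bearer token's claims are what Hasura's permission engine evaluates server-side):

```python
import json

# Hypothetical project URL -- substitute your own Hasura Cloud endpoint.
HASURA_URL = "https://my-app.hasura.app/v1/graphql"

def build_graphql_request(query, auth_token):
    """Build the HTTP pieces for a Hasura GraphQL call. Which rows come
    back is decided by Hasura's permission rules for the token's role,
    not by anything the client does."""
    return {
        "url": HASURA_URL,
        "headers": {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {auth_token}",
        },
        "body": json.dumps({"query": query}),
    }

# 'todos' is an invented table name standing in for one you have tracked
req = build_graphql_request("query { todos { id title done } }", "<jwt>")
print(req["headers"]["Authorization"])  # Bearer <jwt>
```

Sending `req` with any HTTP client (e.g. `urllib.request` or `requests`) completes the call; the point of the sketch is that authorization lives entirely in the JWT plus Hasura's rules.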


Why Work-From-Home IT Teams May Be at a Greater Risk for Burnout

Typical burnout indicators include a loss of interest, reduced productivity, and an inability to fully discharge their professional duties. “People may also experience high levels of exhaustion, stress, anxiety, and pessimism,” notes Joe Flanagan, senior employment advisor at online employment services provider VelvetJobs. Flanagan stated that burnout can also lead to, or trigger, other mental health issues. “Employers and managers should be trained and sensitized to identify these signs, and teams must have checks and balances to provide support to individuals who are at a higher risk,” he advises. Immediate action is necessary as soon as burnout is suspected in a team or a specific worker, Welch suggests. The solution may be as simple as extending a deadline or offering additional support. He also advises establishing communication channels, such as team video calls, which will allow colleagues to interact with each other, exchanging news, insights, and other types of chitchat. “Every team is different, so look for whatever works for the team,” Welch says.


Post-Brexit: how has data protection compliance changed?

While much of the European Union’s General Data Protection Regulation (GDPR) has been incorporated into UK law, it’s still important to consider what has changed in terms of how companies – particularly UK-based ones – ensure compliance with data protection regulations. It was argued in 2017 by Index Engines that GDPR puts personal data back in the hands of citizens. This raises the question: does this still apply? No matter what has changed, one challenge will remain: organisations’ ability to find business- and legal-critical information within their vast unstructured data stores. Then there are the decisions about when to delete data, where to store it, and when to modify and rectify it. This is a complex issue, now involving multiple petabytes of data, and organisations have no real understanding of what their unstructured data contains. With this top of mind, there is arguably a need for Wide Area Network (WAN) acceleration to gain the ability to find and move data around at high speed by mitigating latency and packet loss. This works to provide quicker data access and retrieval.


What the US Army can teach us about building resilient teams

Science and stories are two of the best ways to defeat skepticism. Gen. Casey approached Dr. Seligman and his team at the University of Pennsylvania because it was one of the few known institutions that had conducted large-scale training on resilience and had published extensive peer-reviewed research in the area. It was also the only known entity that had extensive experience developing and implementing a resilience train-the-trainer model that had also been scientifically reviewed. ... Holistic programs have the power to inspire and transform an entire organization and those who work in it, and stories of transformation make the work come to life and help concepts stick. The last place I thought I would learn anything about vulnerability was with US Army drill sergeants. Yet I can speak personally about my own transformation working with them. I used to be someone who never talked about failure or my own challenges. It was too risky, especially when I was practicing law. But the soldiers helped me understand that talking about your obstacles isn’t a sign of weakness—it’s courageous and inspiring. Here are two examples.


How do you lead hybrid teams? 5 essentials

Transparency is often a leadership virtue in any type of organization, but it’s an absolute must for hybrid teams. It’s the basis for mutual trust and productivity when people aren’t consistently working together in the same location. This starts with a clear, highly visible method of setting goals and expectations – and a shared belief in how you’re tracking progress. “Leaders need to be transparent on a shared set of objectives and how they are measuring employee productivity,” says Thomas Phelps, CIO at Laserfiche. “For me, it’s not about how many hours you work or when you were last online.” ... Making broad assumptions about everyone’s shared understanding and experience is probably a bad idea in a hybrid work mode, for example. Make sure you’re checking in with people, listening to them, and making positive changes when they’re in order. Phelps says Laserfiche has been regularly soliciting employee feedback about current and future operational plans since the company’s pivot to fully remote/WFH last year. Nayan Naidu, head of DevOps and cloud engineering capability center at Altimetrik, likewise emphasizes the importance of transparently setting expectations and reinforcing them regularly. 



Quote for the day:

"It is, after all, the responsibility of the expert to operate the familiar and that of the leader to transcend it." -- Henry A. Kissinger

Daily Tech Digest - August 24, 2021

The CISO in 2021: coping with the not-so-calm after the storm

Naturally, the challenges facing the modern CISO are not focused on one front. Those on the receiving end of cyber attacks are of just as much concern as those behind them. More than half believe that users are the most significant risk facing their organisation. And just like the threats from the outside, there are several causing concern from within. Human error, criminal insider attacks and employees falling victim to phishing emails are just some of the issues keeping CISOs up at night. With many users now out of sight, working remotely, at least some of the time, these concerns are more pressing than they may once have been. Nearly half of UK CISOs believe that remote working increases the risk facing their organisation. And it’s easy to see why. Non-corporate environments tend to make us more prone to errors and misjudgement, and in turn, more vulnerable to cyber attack. Working from home also calls for slight alterations to security best practice. The use of personal networks and devices may require increased protocols and protections.


How do I select an automated red teaming solution for my business?

There are, however, tools that can help train defenders or aid in discovering gaps in defensive investment. There are three initial considerations for these tools. First, for the best defenders, identifying behavior, not static signatures or tools, is crucial. By correlating events and telemetry, they can spot new/unknown tools and react faster. To support this, the simulation tool must run complex chains of techniques based on the environment; checking the OS, downloading an implant, executing persistence, then searching local files before moving laterally, as an example. Secondly, the solution’s techniques must be relevant, based on updated imitations of those observed from real actors. Use of threat intelligence will benchmark against genuine attackers instead of generic outdated threats, decreasing the likelihood of defensive gaps. Finally, getting metrics on the performance of current defensive set-ups requires the solution to integrate with the SIEM. Without this, gaining evidence of MITRE-mapped controls failing becomes cumbersome and error-prone.
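A toy sketch of such a technique chain (every step here is a harmless stub with invented names; a real simulation tool would run genuine imitations of attacker techniques and let your detection stack observe the behavior):

```python
import platform

def check_os():
    """Environment discovery step: later steps could branch on this."""
    return platform.system()

def chain(steps):
    """Run an ordered chain of (name, action) steps, as a simulation tool
    might, so defenders can test behavioral correlation across stages."""
    results = []
    for name, step in steps:
        results.append((name, step()))
    return results

# Stubbed stages mirroring the example chain in the text
steps = [
    ("discovery/os", check_os),
    ("ingress/implant", lambda: "download simulated"),
    ("persistence", lambda: "registry write simulated"),
    ("collection/local-files", lambda: "search simulated"),
    ("lateral-movement", lambda: "SMB hop simulated"),
]
for name, result in chain(steps):
    print(name, "->", result)
```

The value of chaining is that a behavioral detection has to correlate all five stages as one incident, which is exactly what signature matching on any single stub would miss.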


What Enterprises Can Learn from Digital Disruption

Operating in today's climate means updating mindsets, processes, budgeting cycles, incentive systems and traditional ways of working. It's not about ping pong tables and arcade rooms. It's being better at delivering on core competencies than competitors and having the digital savviness required to succeed in a digital-first world. However, the most valuable trait is curiosity because curiosity leads to experimentation, innovation, optimization, and learning. “Disruptors face the challenge of explaining the concept and the benefits of the new approach. Many organizations struggle to grasp it and operate under the inertia of business as usual,” says Greg Brady, founder and chairman of supply chain control tower provider One Network Enterprises. “The COVID-19 pandemic has opened the eyes of many executives to the shortcomings of the old way of doing business.” Some organizations attempt to mimic what the digital disrupters do. However, their success tends to depend on the context in which the concept was executed.


Break the Cycle of Yesterday's Logic in Organizational Change and Agile Adoption

Like Tibetan prayer wheels, each framework promises to be the best business changer if one follows its special consultancy. Swayed by the marketing machinery, executives and senior managers pick one of them, hoping it will suit them, instead of looking to their inner and outer organizational opportunities and boundaries to find real value-adding outcomes for their business. These artificial dual operating systems get designed alongside the line organisations with their job descriptions, hierarchies, performance contracts, engineering models and cultural values. Hurdles are preprogrammed, because for many technically driven enterprises, industrial standards simply don’t scale with agile frameworks. A logical inference is that the necessary variety is largely lost, and opportunities to operationalize variety at minimal investment cost are foreclosed. Consequently, the change system will behave like dandelion seeds: the change will take time, costs will spread, and development transaction costs will increase.


How to choose the best NVMe storage array

NVMe’s parallelism is fundamental to its value. Where SAS-based storage supports a single message queue and 256 simultaneous commands per queue, NVMe ramps this all the way up to 64,000 queues, each with support for 64,000 simultaneous commands. That massive increase is key to enabling you to ramp up the number of VMs on a single physical host, driving greater efficiency and easing management. Identifying individual workloads and planning for growth over time--along with high availability needs and continuity requirements (backup/restore, replication, geo-redundancy, or simply disaster recovery)--can help paint a picture of what you need in an NVMe array. While each of these considerations has the potential to drive up the initial cost of whichever NVMe array you select (or multiple arrays, when you consider redundancy), smart investments that match your needs ultimately reduce your cost of ownership in the long run. NVMe arrays are big-ticket items, so efficient storage practices are critical to making the most of the hardware you buy and extending the lifecycle of your storage media.
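The queue arithmetic from those figures can be checked directly:

```python
# Outstanding-command capacity implied by the figures quoted above
sas_queues, sas_cmds_per_queue = 1, 256
nvme_queues, nvme_cmds_per_queue = 64_000, 64_000

sas_outstanding = sas_queues * sas_cmds_per_queue      # 256
nvme_outstanding = nvme_queues * nvme_cmds_per_queue   # 4,096,000,000

print(nvme_outstanding // sas_outstanding)  # 16000000
```

That is a sixteen-million-fold increase in theoretical outstanding commands, which is why a single NVMe array can absorb the I/O of far more VMs per host than a SAS array could.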


Progressive Delivery: A Detailed Overview

In a traditional waterfall model, teams release new features to an entire user base at one time. Using progressive delivery, you roll out features gradually. Here’s how it works: DevOps managers first ship a new feature to release managers for internal testing. Once that’s done, the feature goes to a small batch of users to collect additional feedback, or is incrementally released to more users over time. The final step is a general launch when the feature is ready for the masses. It’s a bit like dipping your toes into the water before diving in. If something goes wrong during a launch, you haven’t exposed your entire user base to it. You can easily roll the feature back if you need to and make changes. Progressive delivery emerged in response to widespread dissatisfaction with the continuous delivery model. DevOps teams needed a way to control software releases and catch issues early on instead of pumping out bug-filled versions to their users, and progressive delivery met this requirement.
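One common way to implement the incremental-release step is deterministic user bucketing; a minimal sketch (illustrative, not any specific feature-flag vendor's API):

```python
import hashlib

def in_rollout(user_id, feature, percent):
    """Hash the user into a stable bucket 0-99; the user sees the feature
    while their bucket is below the rollout percentage. Because the hash
    is deterministic, widening the percentage only ever adds users."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return bucket < percent

# Users in the early 10% cohort stay included as the rollout widens to 50%
early = {u for u in range(1000) if in_rollout(u, "new-ui", 10)}
wider = {u for u in range(1000) if in_rollout(u, "new-ui", 50)}
print(early.issubset(wider))  # True
```

Rolling back is just lowering the percentage (ultimately to 0), which is what makes a bad release containable instead of a full-user-base incident.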


Employees Can Be Insider Threats to Cybersecurity. Here's How to Protect Your Organization.

Politics are another strong motivation for employees to become insider threats. For example, an employee might be upset with his or her work situation or job title but can't see a way to fix it because of inter-office politics. This could lead to that employee becoming disgruntled and wanting to take revenge on the company. This situation is common in enterprise-level organizations, where management doesn't take the time to get to know their employees or address their concerns. Providing an environment where employees can reach their full potential and have open lines of communication with their chain of command can help mitigate potential political concerns. This ties closely to professional reasons. For example, employees might feel slighted after being passed over for a promotion, or they might be the target of an internal investigation for misconduct. On the other hand, they could find themselves the target of misconduct by a peer or boss, which could lead them to take matters into their own hands. Humans are emotional creatures, and this, of course, applies to employees as well. 


Three reasons why ransomware recovery requires packet data

SecOps team members or external consultants can comb through the data to find the original malware that caused the attack, determine how it got onto the network in the first place, map how it traversed the network and determine which systems and data were exposed. Note that the storage capacity required to store even a week’s worth of packet data can quickly become prohibitively expensive for high-speed networks. To have a realistic chance of storing a large enough buffer, these organizations will need to be smart about where to capture and how much to capture. One way to do this is to use intelligent packet filtering and deduplication by front-ending the packet capture devices with a packet broker to reduce the amount of data saved. Another method is using integrations between the security tools and the capture devices to only capture packet data correlated with incidents or high alerts. Using a rolling buffer strategy to overwrite the data after a “safe period” has passed will also reduce storage requirements. 
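The rolling-buffer idea can be sketched in a few lines (short strings stand in for real captured frames; a production capture appliance does this at line rate on disk, not in memory):

```python
from collections import deque

class RollingBuffer:
    """Keep only the newest N packets; once full, each new capture
    silently overwrites the oldest, bounding storage cost."""
    def __init__(self, max_packets):
        self._buf = deque(maxlen=max_packets)

    def capture(self, packet):
        self._buf.append(packet)

    def packets(self):
        return list(self._buf)

buf = RollingBuffer(max_packets=3)   # tiny capacity for illustration
for p in ["p1", "p2", "p3", "p4"]:
    buf.capture(p)
print(buf.packets())  # ['p2', 'p3', 'p4'] -- oldest packet overwritten
```

The "safe period" in the text maps to sizing `max_packets` (or a time window) so the buffer reliably spans the interval between an intrusion and its detection.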


The key to mobile security? Be smarter than your device

What people often forget is that the shiny all-singing, all-dancing device in their pocket is also a highly capable surveillance device, boasting advanced sensory equipment (camera and microphone), and a wealth of tracking information. People just assume that their mobile device is secure and often use it with less care (from a security point of view) for things that they wouldn’t do on a laptop. To this end, we now have a vast industry that sets out to secure and empower productivity on the basis that people can work anywhere and often use their devices for both work and personal use. Mobility and cloud technology have become essential with most people now working and managing their personal lives in a digital fashion. To borrow a saying from the world of Spider-Man (slightly out of context) — with great power comes great responsibility. We now live in a world where the once humble communication device is now a very powerful tool that needs to be used responsibly in the face of those wishing to act in a nefarious way.


How to Develop a Data-Literate Workforce

You probably already know the importance of data literacy, but to frame this article, let's position the benefits in a modern data governance setting. The best way to do so is to use an example where the absence of data literacy led to disastrous consequences. There are many well-known examples of data literacy issues leading to extreme failures. However, one of the most significant occurred at NASA in 1999 and led to the loss of a $125 million Mars probe. The probe burnt up as it descended through the Martian atmosphere because of a mathematical error caused by conflicting definitions. The navigation team at NASA's Jet Propulsion Laboratory (JPL) worked in the metric system (meters and millimeters), while Lockheed Martin Astronautics, the company responsible for designing and building the probe, provided the navigation team with acceleration data in imperial measurements (feet, pounds, and inches). Because there were no common terms or definitions in place, the JPL team read the data inaccurately and failed to quantify the speed at which the craft was accelerating. The result was catastrophic, but it could have been easily avoided if a system of data literacy had been in place.
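A minimal sketch of the kind of explicit unit normalization that would have caught the mismatch at the interface between the two teams — the function name and unit labels are illustrative, though the conversion factor shown is the standard pound-force-second to newton-second value:

```python
LBF_S_TO_N_S = 4.4482216153  # one pound-force second in newton-seconds

def to_newton_seconds(value, unit):
    """Normalize incoming impulse data to SI units before any
    navigation calculation ever sees it."""
    if unit == "N*s":
        return value
    if unit == "lbf*s":
        return value * LBF_S_TO_N_S
    # Refusing to guess is the point: an unlabeled or unknown unit
    # is an error, not a silent pass-through.
    raise ValueError(f"unknown unit: {unit}")
```

The design choice worth noting is the explicit failure on unrecognized units — the Mars probe was lost precisely because raw numbers crossed a team boundary with no shared definition attached.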



Quote for the day:

"The first key to leadership is self-control" -- Jack Weatherford

Daily Tech Digest - August 23, 2021

Is this the end of the Point of Sale (PoS)?

The best part of all of this, in my opinion, is the further digital acceleration it affords us. It allows you to retire old equipment that’s often temperamental; you get to integrate quicker; and you get to deliver new digital interactions far quicker than waiting for a PoS integration team. Testing becomes simplified, and all devices become commodity mobile phones and tablets. The icing on the cake is that the barrier for entry is incredibly low; you can integrate with a payment system for next to no cost, and being a service provider, they’ve made it as simple as possible. The integration of an app-based PoS into an app ecosystem allows for a single, seamless journey that’s personal to the customer, empowering, and overall just a better experience for many users. However, one of the hurdles to get over is the level of app installation fatigue, as not everybody wants an app per place they visit. This is a huge opportunity for Uber equivalents to come in and provide a unified platform (which is working well for things like food delivery), as mobile-first web apps aren’t always a very slick experience.


World Bank Launches Global Cybersecurity Fund

The new cybersecurity initiative aims to accelerate digital transformation by improving governments' technical capabilities and their efforts to increase security awareness. A spokesperson for the World Bank tells Information Security Media Group that associated funds will be disbursed "using diverse implementation models" to catalyze specific cybersecurity investments. The amount of funding to be provided was not revealed. The bank calls particular attention to security investments that improve critical infrastructure - including the energy, transportation, finance and healthcare sectors. "These systems [designed prior to, or during the early years of the digital revolution] … are today highly vulnerable to cybersecurity attacks with possibly serious outcomes," the bank says on the fund's dedicated webpage. The World Bank spokesperson says its new funding can help improve cybersecurity awareness at the national level and enable governments to identify risks, fund technical solutions and prepare for infrastructure investments.


How attackers could exploit breached T-Mobile user data

T-Mobile is offering all impacted customers a free two-year subscription for McAfee's ID Theft Protection Service, which includes credit monitoring, full-service identity restoration, identity insurance, dark web monitoring, and more. Business and postpaid customers can also enable T-Mobile's Account Takeover Protection service for free and all T-Mobile users can use the company's Scam Shield app that enables caller ID and automatically blocks calls flagged as scams. More generally, all mobile subscribers should check with their carriers about what options they have to secure their accounts against SIM swapping or number porting, and they should enable that additional verification. Using text messages or phone calls for two-factor authentication should be disabled where possible in favor of two-factor authentication via a mobile app or a dedicated hardware token, especially for high-value accounts. Email accounts are high-value accounts because they are used to confirm password reset requests for most other online accounts. Finally, be wary of email or text messages that ask for sensitive information such as passwords, PINs, access tokens, or that direct you to websites that ask for such information.
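The app-based one-time codes recommended here are typically TOTP (RFC 6238). A minimal sketch using only the Python standard library — real authenticator apps add secret provisioning, clock-drift windows, and rate limiting on top of this core:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password: an HMAC-SHA1 over the
    current 30-second time step, dynamically truncated to N digits."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is derived from a shared secret plus the clock, nothing travels over the phone network — which is exactly why it resists the SIM-swapping attacks described above.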


Open Banking Transforming Business Models Forever

The potential to use APIs to broaden relationships and improve the customer experience has exploded over the past decade, with platform organizations such as Apple, Google, Amazon, Uber and Facebook using the model to grow exponentially and grab significant market share from established firms, including banks and credit unions. But, you don’t have to be a tech giant to benefit from APIs — the opportunity is being leveraged in virtually every industry and by organizations of all sizes. In fact, small and midsize financial institutions that want to reach digital audiences beyond their existing geography or traditional product set can leverage open APIs. The options include creating an independent platform, partnering to jointly create a platform, or becoming part of another platform’s ecosystem. And there are many third-party solution providers who are willing to assist. According to the Harvard Business Review, “Smaller firms could have an agility advantage by unbundling their capabilities, designing for their consumers, and exploiting opportunities in their respective ecosystems.”


10 Tips to Overcome Obstacles of AI-Enabled Digital Transformation

The bottom line: don’t add too many unknowns to your transformation program. AI projects require iterative testing and evolution of supporting processes and clean, consistent, well-architected data is the price of admission. Don’t assume that the data is in place and usable for the target process, and don’t take the promises of vendors or status of program leaders far removed from the front lines as reality. The best way to determine whether supporting processes and data are at the level required for success is through competitive benchmarking, internal benchmarking, heuristic evaluations, and maturity assessments. You need objective metrics to know if your data is adequate. A heuristic (collection of best practices and rules of thumb) evaluation can provide a snapshot of how well the organization is doing on current efforts. What does the organization have to work with? Are foundational processes and data quality strong? Or does strengthening the foundation require significant time and effort? A maturity assessment cuts across multiple dimensions that may appear beyond the scope of the domain but would impact downstream processes for a given area.


Defence in Depth – Time to start thinking outside the box

By embedding another link within an article, linked to from an email, this lured the recipient into clicking a bad link, and bypassed the normal scanning tools. This illustrates that even with anti-phishing in place, defences can still be breached. So, what could have been done to prevent this? Firstly, you might be asking why the IPS solution didn’t prevent this in the first place. Normally, it would; however, these days, we are mostly at home, so the average home router does not have this functionality and people are not always connected to a VPN. To analyse what went wrong, and to prevent further attacks, we first checked the Cyber-attack Chain, or the MITRE ATT&CK framework. This is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations. This helped us to understand how an attacker had bypassed the previous measures. When we dug deeper, we saw there had been a successful Defence Evasion; the email solution was exploited by allowing a phishing email through. This could have easily led to Credential Access or Installation with further persistence.


Counterintuitive Strategies for the Digital Economy

The most common explanations include running out of capital, underestimating the competition, or overinvesting in the product. “Many companies think ‘if I add in another feature I’ll grow,’ but companies which are more focused on the fit and how to address it to the buyer tend to do better,” said Finkeldey. Other reasons to fail include incorrect go-to-market business models, a lack of business model fit, and poor marketing. This last one was an interesting inclusion and was backed up with some Gartner research. Often, marketing, brand building, and thought leadership can be seen as luxuries, but they are key to opening up new markets and achieving growth and success. Finkeldey’s Gartner colleague Alastair Woolcock pointed out that around 47% of the operational spend could be on sales and marketing among successful companies in the SaaS space. For those spending much less than that, say up to 15%, just increasing it by 5% or 10% was not the answer. “Stepping in half the way only gets them half the way,” said Woolcock. So, while the temptation is to “run and hire a bunch of sellers,” outsourcing this function was often misplaced investment in the current market.


Why automated pentesting won’t fix the cybersecurity skills gap

Security teams need to have the adversarial or hacker mindset – i.e., they have to think as an attacker. They need to stay a step ahead of the cyber criminals and advise the rest of the organization on the important and timely actions to take. Not every vulnerability is obvious. The best way to defend the enterprise is for defenders to think like attackers and try harder every time they seemingly hit a dead-end – not giving up easily on something they see that doesn’t make sense. Successfully defending systems, networks, and applications requires not only an understanding of the tools an attacker could use, but how they use them and when they use them. This requires a lot of judgement calls, asking a lot of questions that start with “why”, and those cannot be accomplished with automated tests. Automated tests are only as good as what you tell them to look for and do. What makes security hard is that each time, the attacker is doing something different and new. Attackers don’t need a massive vulnerability to impact organizations – they are patient, waiting for an individual to make a mistake to let them in, either via phishing or social engineering.


Hackers are getting better at their jobs, but people are getting better at prevention

One of the other issues, though, that you should realize is that even if there is going to be federal legislation, it's only going to make a difference if it overrides and preempts state laws, and the states do not want that to happen. The states want to protect their own people, and any law that would be adopted on the federal level would be unlikely to be as comprehensive as some of the state laws. But in any case, I'll tell you that in order to comply with these laws, any one of them, California for example, requires a great deal of work. It requires an understanding of all the data you collect, who has access to that data, where it's stored, who uses that data, who in your supply chain is involved in that project. And that is a very, very big endeavor. Now, it's a very valuable endeavor because a company that understands its collection and use of data is going to understand its business much, much better. I've actually seen companies that go through that process and realize that they can improve their businesses, but it's like going on a diet and working out. 


Top 6 Time Wastes as a Software Engineer

There's a delicate balance to strike when choosing between automated and manual testing. So let's understand how you, as a software engineer, can use this to work out an efficient testing strategy. It's easy to write a small manual test to ensure that the new feature you added is working fine. But as you scale, running those manual tests demands more and more hours, especially when you're trying to find that pesky bug that keeps breaking your code. If your application or website has many components, the chances of you not running a specific test by mistake also increase. Automated tests, or even a system to run tests more efficiently, help avoid this. You would need to spend a bit more time setting up your automated tests. Once they are written, though, they can be reused and triggered as soon as you make any code changes. So you don't have to manually re-test previous functions just because you added a new one. Conversely, choosing the right tasks to automate is just as important, and getting it wrong is one of the most common mistakes in QA automation testing. It's tempting to fall into the trap of over-automating things and ending up replicating tests script-by-script.
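The pay-off described above — write once, re-run on every change — looks like this in practice. A toy example with a hypothetical `apply_discount` feature; any test runner (pytest, a CI hook) can re-execute these checks automatically after each commit, so old functionality is re-verified for free:

```python
def apply_discount(price, percent):
    """Existing feature under test: percentage discount, floored at zero."""
    return max(0.0, price * (1 - percent / 100))

# Once written, these regression tests run on every code change --
# no manual re-testing of previous functions when a new feature lands.
def test_basic_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_never_negative():
    assert apply_discount(10.0, 150) == 0.0
```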



Quote for the day:

"Successful leadership requires positive self-regard fused with optimism about a desired outcome." -- Warren Bennis

Daily Tech Digest - August 22, 2021

Move Fast Without Breaking Things in ML

The first step in the response to the problem has happened even before you got invited to the call with your CTO. The problem has been discovered and the relevant people have been alerted. This is likely the result of a metric monitoring system that is responsible for ensuring important business metrics don’t go off track. Next, using your ML observability tooling, which we will talk a bit more about in a second, you are able to determine that the problem is happening in your search model, since the proportion of users who are engaging with your top n links returned has dropped significantly. After learning this you rely on your model management system to either roll back to your previous search ranking model or deploy a naive model that can hold you over in the interim. This mitigation is what stops your company from losing (as much) money every minute, since every second counts while users are being served incorrect products. Now that things are somewhat working again, you need to look back to your model observability tools to understand what happened with your model.
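The mitigation step — fall back to a previous or naive model when the engagement metric drops — can be sketched as a simple guard. The names and the 20% tolerance below are illustrative, not a reference to any particular model-management system:

```python
def choose_serving_model(engagement_rate, baseline_rate, models,
                         tolerance=0.2):
    """If the live engagement metric has dropped more than `tolerance`
    below its baseline, serve the previous (or naive) model while the
    investigation into the current model continues."""
    degraded = engagement_rate < baseline_rate * (1 - tolerance)
    return models["previous"] if degraded else models["current"]
```

In production this decision would sit behind the metric-monitoring alert described above, so the rollback fires automatically rather than waiting for a call with the CTO.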


Ransomware is the top cybersecurity threat we face, warns cyber chief

Not only are cyber-criminal ransomware groups encrypting networks and demanding a significant payment in exchange for the decryption key, now it's common for them to also steal sensitive information and threaten to release it unless a ransom is paid – often leading victims to feel as if they have no choice but to give in to the extortion demands. "As the business model has become more and more successful, with these groups securing significant ransom payments from large profitable businesses who cannot afford to lose their data to encryption or to suffer the down time while their services are offline, the market for ransomware has become increasingly professional," Cameron will say. Ransomware is successful because it works; in many cases, because organisations still don't have the appropriate cyber defences in place to prevent cyber criminals infiltrating their network in the first place in what the NCSC CEO describes as "the cumulative effect of a failure to manage cyber risk and the failure to take the threat of cyber criminality seriously".


Become software engineers, not software integrators.

Ever since its inception, the IT industry has been evolving every day, delivering better and richer technology experiences to end-users. On the other hand, the industry has also continually focused on reducing the development time and cycle for software engineering teams. A significant portion of IT engineers & organizations are motivated to ease the development process. This in turn has become a race to give the best technologies (frameworks, tools, etc.) to engineering teams. In this race, their focus has gradually shifted from “ease of development” to almost “no development at all”, i.e. making tools, which allow the engineers to just integrate stuff to provide the final product. Essentially, plug and play. Of course, the big advantages because of this are that: Now the companies which are building software for businesses can focus more on business ideas; and With a reduced development cycle, companies can build many more software products. However, the concern starts when engineers, who get used to the plug & play tools, start losing core engineering skills like optimizing, maturing, and architecting the code.


How External IT Providers Can Adopt DevOps Practices

The key is to overcome waterfall thinking. A modern supplier will work in small batches and will use an experimental approach to product development. The supplier’s product development team will create hypotheses and validate them with small product increments, ideally in production. According to my experience, many IT suppliers use agile software development and Continuous Integration these days. But they stop their iterative approach at the boundary to production. One problem with having separate silos for development and operations is that in most cases these two silos have different goals (dev = throughput, ops = stability), Diener mentioned. In contrast, a DevOps team has a common business goal. ... In order to adopt DevOps practices, the supplier has to find out what his client’s goal is. It has to become the supplier’s goal as well. We at cosee use product vision workshops to shape and document the client’s goal (impact) and its user’s needs (outcome). That’s a prerequisite for an iterative and experimental product development approach.


Blockchain in Space: What’s Going on 4 Years After the First Bitcoin Transaction in Orbit?

The growth in both scale and affordability of space exploration is creating a whole new sector — the Space Economy, as the United Nations Office for Outer Space Affairs already calls it. An inevitable question then arises: what money will the players in this space economy use? ... Despite all the advances, space exploration often remains a costly business, both in money and science capital. Because of that high cost nature, any large project in space requires the cooperation of numerous private companies, each providing resources and talent. And the most ambitious programs are collaborations between governments — not all of which necessarily put a lot of trust in each other. This is where one of blockchain’s key advantages comes in: it enables the exchange of value and data between independent parties in a way that doesn’t involve trust. With smart contracts, peer-to-peer transaction settlement, and the transparency and accountability enabled by public blockchain records …


Upcoming Trends in DevOps and SRE in 2021

Service meshes are quickly becoming an essential part of the cloud-native stack. A large cloud application may require hundreds of microservices and serve a million users concurrently. A service mesh is a low-latency infrastructure layer that allows high traffic communication between different components of a cloud application (databases, frontends, etc.). This is done via application programming interfaces (APIs). Most distributed applications today have a load balancer that directs traffic; however, most load balancers are not equipped to deal with a large number of dynamic services whose locations/counts vary over time. To ensure that large volumes of data are sent to the correct endpoint, we need tools that are more intelligent than traditional load balancers. This is where service meshes come into the picture. In typical microservice applications, the load balancer or firewall is programmed with static rules. However, as the number of microservices increases and the architecture changes dynamically, these rules are no longer enough.
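The difference between static load-balancer rules and mesh-style discovery can be illustrated with a toy registry whose endpoint list changes at runtime — a stand-in for the live service catalogue a real mesh control plane maintains, not any actual mesh API:

```python
class ServiceRegistry:
    """Toy discovery layer: endpoints register and deregister at runtime,
    so routing decisions never go stale the way statically configured
    load-balancer rules would."""

    def __init__(self):
        self._endpoints = {}  # service name -> list of live addresses

    def register(self, service, address):
        self._endpoints.setdefault(service, []).append(address)

    def deregister(self, service, address):
        self._endpoints.get(service, []).remove(address)

    def route(self, service, key):
        """Pick a live endpoint for this request (trivial modulo spread)."""
        addresses = self._endpoints.get(service)
        if not addresses:
            raise LookupError(f"no live endpoints for {service}")
        return addresses[key % len(addresses)]
```

A real mesh adds health checking, mutual TLS, retries, and telemetry on top of this lookup, but the core idea is the same: routing consults current state rather than a static rule table.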


How GPT-3 and Artificial Intelligence Will Destroy the Internet

As a natural language processor and generator, GPT-3 is a language learning engine that crawls existing content and code to learn patterns, recognize syntax, and produce unique outputs based on prompts, questions and other inputs. But GPT-3 is more than just a tool for content marketers, as witnessed by the recent OpenAI partnership with GitHub for creating code using a tool dubbed “Copilot.” The ability to use autoregressive language modeling doesn’t just apply to human language, but also various types of code. The outputs are currently limited, but its future potential use could be vast and impactful. As for how GPT-3 is currently kept at bay: with current beta access to the OpenAI API, we developed our own tool on top of the API. The current application and submission process with OpenAI is stringent. Once an application has been developed, before it can be released to the public for use in any commercial application, OpenAI requires a detailed submission and use case for approval by the OpenAI team.


NFTs, explained

“Non-fungible” more or less means that it’s unique and can’t be replaced with something else. For example, a bitcoin is fungible — trade one for another bitcoin, and you’ll have exactly the same thing. A one-of-a-kind trading card, however, is non-fungible. If you traded it for a different card, you’d have something completely different. You gave up a Squirtle, and got a 1909 T206 Honus Wagner, which StadiumTalk calls “the Mona Lisa of baseball cards.” (I’ll take their word for it.) At a very high level, most NFTs are part of the Ethereum blockchain. Ethereum is a cryptocurrency, like bitcoin or dogecoin, but its blockchain also supports these NFTs, which store extra information that makes them work differently from, say, an ETH coin. It is worth noting that other blockchains can implement their own versions of NFTs. (Some already have.) NFTs can really be anything digital (such as drawings, music, your brain downloaded and turned into an AI), but a lot of the current excitement is around using the tech to sell digital art.


Demystifying AI: The prejudices of Artificial Intelligence (and human beings)

In a way, the results of these algorithms hold a mirror to human society. They reflect and perhaps even amplify the issues already present. We know that these algorithms need data to learn. Their predictions are only as good as the data they are trained on and the goal they are set to achieve. The data needed to train these algorithms is huge (think millions and above). Suppose we are trying to develop an algorithm to identify cats and dogs from pictures. Not only do we need thousands of pictures of cats and dogs, but they should be labeled (say the cat is class 0 and dog is class 1) so that the algorithm can understand. We can download these images off the internet (the ethics of which is questionable), but still, they need to be labeled manually. Now, consider the complexity and effort required to correctly label a million images in one thousand classes. Often this labeling task is done by “cheap labor” who may or may not have the motivation to do it correctly, or they simply make mistakes. Another problem in the data set is that of class imbalance. 
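The class-imbalance problem mentioned at the end is easy to demonstrate: with skewed labels, a model that has learned nothing can still look accurate. A small illustration with made-up cat/dog labels:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Imbalanced labels: 95 dogs (class 1) and only 5 cats (class 0).
y_true = [1] * 95 + [0] * 5

# A useless model that always predicts "dog" still scores 95% accuracy,
# while never identifying a single cat -- the class-imbalance trap.
always_dog = [1] * 100
print(accuracy(y_true, always_dog))  # 0.95
```

This is why practitioners look at per-class metrics (precision, recall) rather than raw accuracy whenever the training data is imbalanced.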


Three Mistakes That Will Ruin Your Multi-Cloud Project (and How to Avoid Them)

A multi-cloud strategy only augments the likelihood of experiencing one of these errors. The complexity of multiple clouds provides an extended attack surface for threat actors. An increased number of services means a higher chance of experiencing a misconfiguration or data leak. Centralized visibility and management are necessary to combat risk and ensure protection and compliance across multi-cloud environments. Proper governance requires a full view of the cloud, complete with resource consumption, how new services are accessed, and systems in place for risk mitigation, including data and privacy policies and processes. Rather than a cyclically executed process, risk management must be continuous and contain various coordinated actions and tasks in order to oversee and manage risks. An ecosystem-wide framework going beyond traditional IT is necessary for proper risk management. Enterprises must therefore prioritize training and awareness within their organization, teaching team members how to securely use multiple cloud services. 



Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad

Daily Tech Digest - August 21, 2021

Can AGI take the next step toward genuine intelligence?

To take the next step on the road to genuine intelligence, AGI needs to create its underpinnings by emulating the capabilities of a three-year-old. Take a look at how a three-year-old playing with blocks learns. Using multiple senses and interaction with objects over time, the child learns that blocks are solid and can’t move through each other, that if the blocks are stacked too high they will fall over, that round blocks roll and square blocks don’t, and so on. A three-year-old, of course, has an advantage over AI in that he or she learns everything in the context of everything else. Today’s AI has no context. Images of blocks are just different arrangements of pixels. Neither image-based AI (think facial recognition) nor word-based AI (like Alexa) has the context of a “thing” like the child’s block which exists in reality, is more-or-less permanent, and is susceptible to basic laws of physics. This kind of low-level logic and common sense in the human brain is not completely understood but human intelligence develops within the context of human goals, emotions, and instincts. Humanlike goals and instincts would not form the best basis for AGI.


How to take advantage of Android 12’s new privacy options

First and foremost in the Android 12 privacy lineup is Google’s shiny new Privacy Dashboard. It’s essentially a streamlined command center that lets you see how different apps are accessing data on your device so you can clamp down on that access as needed. ... Next on the Android 12 privacy list is a feature you’ll occasionally see on your screen but whose message might not always be obvious. Whenever an app is accessing your phone’s camera or microphone — even if only in the background — Android 12 will place an indicator in the upper-right corner of your screen to alert you. When the indicator first appears, it shows an icon that corresponds with the exact manner of access. But that icon remains visible only for a second or so, after which point the indicator changes to a tiny green dot. So how can you know what’s being accessed and which app is responsible? The secret is in the swipe down: Anytime you see a green dot in the corner of your screen, swipe down once from the top of the display. The dot will expand back to that full icon, and you can then tap it to see exactly what’s involved.


Achieving Harmonious Orchestration with Microservices

The interdependency of your microservices-based architecture also complicates logging and makes log aggregation a vital part of a successful approach. Sarah Wells, the technical director at the Financial Times, has overseen her team’s migration of more than 150 microservices to Kubernetes. Ahead of this project, while creating an effective log aggregation system, Wells cited the need for selectively choosing metrics and named attributes that identify the event, along with all the surrounding occurrences happening as part of it. Correlating related services ensures that a system is designed to flag genuinely meaningful issues as they happen. In her recent talk at QCon, she also notes the importance of understanding rate limits when constructing your log aggregation. As she pointed out, when it comes to logs, you often don’t know if you’ve lost a record of something important until it’s too late. A great approach is to implement a process that turns any situation into a request. For instance, the next time your team finds itself looking for a piece of information it deems useful, don’t just fulfill the request, log it with your next team’s process review to see whether you can expand your reporting metrics.
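One common way to make events correlatable across services, as described above, is to attach a single correlation ID to every log line a request produces. A minimal sketch — the event names and JSON-lines format here are illustrative, not the Financial Times' actual scheme:

```python
import json
import logging
import uuid

logger = logging.getLogger("checkout")

def handle_request(payload, correlation_id=None):
    """Attach one correlation ID to every log event a request produces,
    so the log aggregator can stitch together all surrounding
    occurrences of the same logical event across services."""
    correlation_id = correlation_id or str(uuid.uuid4())
    logger.info(json.dumps({"event": "request.received",
                            "correlation_id": correlation_id}))
    # ... call downstream services, forwarding correlation_id with
    # each call so their log lines carry the same ID ...
    logger.info(json.dumps({"event": "request.completed",
                            "correlation_id": correlation_id}))
    return correlation_id
```

The key design choice is that the ID is generated once at the edge and propagated, never re-generated downstream; otherwise, related events in the aggregator cannot be joined.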


How Ready Are You for a Ransomware Attack?

Setting the bar high enough to protect against initial entry is a laudable goal, but also adheres to the law of diminishing returns. This means the focus must shift towards improving how difficult it is for an attacker to move around your environment once they have gotten inside. This phase of the attack often requires some manual control, so identifying and disrupting command and control (C2) channels can pay significant dividends – but realize that only the least sophisticated attacker will reuse the same domains and IPs of a previous attack. So rather than looking for C2 communications via threat intel feeds, your approach needs to be to look for patterns of behavior which look like remote-access trojans (RATs) or hidden tunnels (suspicious forms of beaconing). Barriers to privilege escalation and lateral movement come down to cyber-hygiene related to patching (are there easily accessible exploits for local privilege escalation?), rights management (are accounts granted overly generous privileges?) and network segmentation (is it easy to traverse the network?). Most of the current raft of ransomware attacks have utilized the serial compromise of credentials to move from the initial point-of-entry to more useful parts of the network.
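The "patterns of behavior" approach to spotting beaconing can be sketched as a timing heuristic: connections to one destination at suspiciously regular intervals look more like a RAT checking in than human-driven traffic. The jitter threshold and minimum event count below are illustrative, and a real detector would also weigh payload sizes, destinations, and protocol anomalies:

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter_ratio=0.1, min_events=6):
    """Flag a connection series whose inter-arrival times are unusually
    regular (low jitter relative to the mean interval)."""
    if len(timestamps) < min_events:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    if avg <= 0:
        return False
    # Human browsing is bursty; automated beacons tick like a clock.
    return pstdev(intervals) / avg < max_jitter_ratio
```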


The rise and fall of merit

Wooldridge identifies Plato’s Republic as the origin of the concept of meritocracy, in which the Athenian philosopher imagined a society run by an intellectual elite, “who have the ability to think more deeply, see more clearly and rule more justly than anyone else.” Crucially, Plato’s ruling class was remade each generation—aristocrats were not assumed to pass on their talents—and it prized women as highly as men. Wooldridge finds meritocratic leanings in other pre-modern societies, including China, which began in the fifth century to use exams to recruit civil servants. But it was the expansion of the state in Europe in the early modern period that saw meritocracy first take root, albeit in a paradoxical way. As states expanded, demand for capable bureaucrats outgrew the ability of the aristocracy to produce them. The solution was to look downward and offer patronage to talented lowborns. Men such as French dramatist Jean Racine; London diarist Samuel Pepys; economist Adam Smith; and Henry VIII’s right-hand man, Thomas Cromwell, were all plucked from obscurity by favoritism. 


Intel Advances Architecture for Data Center, HPC-AI and Client Computing

This x86 core is not only the highest-performing CPU core Intel has ever built; it also delivers a step-function gain in CPU architecture performance that will drive the next decade of compute. It was designed as a wider, deeper and smarter architecture to expose more parallelism, increase execution parallelism, reduce latency and increase general-purpose performance, and it also supports applications with large data and large code footprints. Performance-core provides a geomean improvement of about 19% across a wide range of workloads over the current 11th Gen Intel® Core™ architecture (Cypress Cove core) at the same frequency. Targeted at data center processors and at evolving trends in machine learning, Performance-core brings dedicated hardware, including Intel's new Advanced Matrix Extensions (AMX), to perform matrix multiplication operations, delivering a nearly 8x increase in artificial intelligence acceleration. It is architected for software ease of use, leveraging the x86 programming model.
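Where the AMX speedup comes from can be sketched in plain Python. AMX tile instructions multiply groups of four signed 8-bit values per lane and accumulate into 32-bit lanes, so a deep int8 dot product collapses into a handful of tile operations rather than one multiply-add per element. The sketch below models only those semantics, not the actual instruction encoding or intrinsics:

```python
def dpbssd_lane(acc, a4, b4):
    """One accumulator lane of an AMX-style int8 dot-product step:
    four signed 8-bit pairs are multiplied and summed into a
    32-bit accumulator in a single operation (a semantic sketch,
    not the real instruction)."""
    for x, y in zip(a4, b4):
        assert -128 <= x <= 127 and -128 <= y <= 127  # int8 range
        acc += x * y
    return acc

# A K-deep int8 dot product becomes K/4 lane updates; doing this
# across a whole tile at once is where the large speedup over
# scalar code comes from.
acc = 0
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [8, 7, 6, 5, 4, 3, 2, 1]
for k in range(0, len(a), 4):
    acc = dpbssd_lane(acc, a[k:k + 4], b[k:k + 4])
```

On real hardware the same pattern is applied to entire tiles of data per instruction, in bf16 as well as int8.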


A Soft, Wearable Brain–Machine Interface

Being both flexible and soft, the scalp-worn EEG device can be worn over hair and requires no gels or pastes to stay in place. The improved signal recording is largely down to the micro-needle electrodes, invisible to the naked eye, which penetrate the outermost layer of the skin. "You won't feel anything because [they are] too small to be detected by nerves," says Woon-Hong Yeo of the Georgia Institute of Technology. In conventional EEG set-ups, he adds, any motion by the wearer, like blinking or teeth grinding, causes signal degradation. "But once you make it ultra-light, thin, like our device, then you can minimize all of those motion issues." The team used machine learning to analyze and classify the neural signals received by the system and identify when the wearer was imagining motor activity. That, says Yeo, is the essential component of a BMI: distinguishing between different types of inputs. "Typically, people use machine learning or deep learning… We used convolutional neural networks." This type of deep learning is typically used in computer vision tasks such as pattern recognition or facial recognition, and "not exclusively for brain signals," Yeo adds.
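The network itself is beyond a digest, but the core operation a convolutional network applies to each EEG channel is a 1-D convolution: a small learned kernel slides along the time series and responds strongly wherever the signal matches its pattern. A minimal sketch with a toy signal and a hand-picked edge-detecting kernel (both purely illustrative):

```python
def conv1d(signal, kernel):
    """Minimal 1-D convolution ('valid' mode): slide the kernel
    along the series and return the dot product at each offset."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

# A rising-edge kernel responds most where the toy signal climbs
# fastest; a real BMI stacks many *learned* kernels and feeds the
# responses to a classifier that labels the imagined movement.
signal = [0, 0, 1, 3, 7, 7, 7, 3, 1, 0]
edge_kernel = [-1, 0, 1]
response = conv1d(signal, edge_kernel)
```

Stacking many such kernels, pooling their responses, and training the weights end-to-end is what turns this primitive into the motor-imagery classifier described above.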


How to proactively defend against Mozi IoT botnet

While the botnet itself is not new, Microsoft’s IoT security researchers recently discovered that Mozi has evolved to achieve persistence on network gateways manufactured by Netgear, Huawei, and ZTE. It does this using clever persistence techniques that are specifically adapted to each gateway’s particular architecture. Network gateways are a particularly juicy target for adversaries because they are ideal as initial access points to corporate networks. Adversaries can search the internet for vulnerable devices via scanning tools like Shodan, infect them, perform reconnaissance, and then move laterally to compromise higher value targets—including information systems and critical industrial control system (ICS) devices in the operational technology (OT) networks. By infecting routers, they can perform man-in-the-middle (MITM) attacks—via HTTP hijacking and DNS spoofing—to compromise endpoints and deploy ransomware or cause safety incidents in OT facilities. In the diagram below we show just one example of how the vulnerabilities and newly discovered persistence techniques could be used together.
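One of the MITM techniques mentioned, DNS spoofing by a compromised gateway, shows up as a disagreement between the records the gateway's resolver returns and those from a trusted out-of-band resolver (for example, over DNS-over-HTTPS). A minimal sketch of that comparison; the function name and sample addresses are our own illustration, not Microsoft's tooling:

```python
def dns_answers_disagree(gateway_answers, trusted_answers):
    """Flag a lookup as suspicious when the gateway's answer set
    shares no addresses at all with a trusted resolver's answers.
    Sets are compared because resolvers legitimately rotate record
    order, and CDNs legitimately return partial overlaps."""
    return set(gateway_answers).isdisjoint(trusted_answers)

# Overlapping answers (CDN rotation) are fine; a completely
# disjoint answer set for the same name is worth an alert.
ok = dns_answers_disagree(["93.184.216.34"],
                          ["93.184.216.34", "93.184.216.35"])
spoofed = dns_answers_disagree(["10.13.37.1"],
                               ["93.184.216.34"])
```

In practice this check runs periodically for a handful of high-value names (update servers, identity providers) rather than for all traffic.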


CBAP certification: A high-profile credential for business analysts

CBAP is the most advanced of IIBA’s core sequence of credentials for business analysts. It follows the Entry Certificate in Business Analysis (ECBA) and the Certification for Competency in Business Analysis (CCBA). As you might expect, the requirements get more extensive as you climb the ladder: CBAP requires more training, work experience, and knowledge area expertise. AdaptiveUS, a company that offers training for all of IIBA’s certs, breaks down the various requirements, but the important thing to know is that CBAP holders are at the top of the heap; while you don’t need to have the lower-level certs to get your CBAP certification, you should be fairly well established in your career as a BA before you consider it. Like IIBA’s other certs, the CBAP draws from A Guide to the Business Analysis Body of Knowledge, also known as the BABOK Guide. The BABOK Guide is a publication from IIBA that aims to serve as a bible for the business analysis industry, collecting best practices from real-world practitioners. It was first published in 2005 and is continuously updated. 


A Short Introduction to Apache Iceberg

Partitioning reduces query response time in Apache Hive because data is stored in horizontal slices. In Hive, partitions are explicit: they appear as columns and must be given partition values. This approach gives Hive several problems: it cannot validate partition values, so it depends entirely on the writer to produce correct values; it is likewise entirely dependent on the user to write queries correctly; and working queries are tightly coupled to the table's partitioning scheme, so the partitioning configuration cannot be changed without breaking queries. Apache Iceberg introduces the concept of hidden partitioning, where reading unnecessary partitions is avoided automatically. Data consumers firing queries don't need to know how the table is partitioned or add extra filters to their queries, and Iceberg partition layouts can evolve as needed. Iceberg can hide partitioning because it does not require user-maintained partition columns: it produces partition values by taking a column value and optionally transforming it.
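That last sentence is the key mechanism. Iceberg's built-in transforms (identity, bucket, truncate, year/month/day/hour) derive the partition value from a regular column, so readers filter on the event timestamp and Iceberg maps the predicate to partitions itself. A sketch of a day-style transform (our own simplified illustration, not Iceberg's actual implementation):

```python
from datetime import datetime, timezone

def day_transform(ts):
    """Sketch of an Iceberg-style 'day' partition transform: the
    table stores the raw event timestamp, and the partition value
    (days since the Unix epoch) is derived from it automatically,
    so readers never touch a separate partition column."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return (ts - epoch).days

# Two events on the same UTC day land in the same hidden partition,
# no matter what time of day they occurred.
p1 = day_transform(datetime(2021, 8, 26, 9, 30, tzinfo=timezone.utc))
p2 = day_transform(datetime(2021, 8, 26, 23, 59, tzinfo=timezone.utc))
```

Because the transform is recorded in table metadata rather than baked into user queries, it can later be changed (say, from daily to hourly) without breaking any existing query.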



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.