Daily Tech Digest - December 31, 2021

Can blockchain solve its oracle problem?

The so-called oracle problem may not be intractable, however — despite what Song suggests. “Yes, there is progress,” says Halaburda. “In supply-chain oracles, we have for example sensors with their individual digital signatures. We are learning about how many sensors there need to be, and how to distinguish manipulation from malfunction from multiple readings.” “We are also getting better in writing contracts taking into account these different cases, so that the manipulation is less beneficial,” Halaburda continues. “In DeFi, we also have multiple sources, and techniques to cross-validate. While we are making progress, though, we haven’t gotten to the end of the road yet.” As noted, oracles are critical to the emerging DeFi sector. “In order for DeFi applications to work and provide value to people and organizations around the world, they require information from the real world — like pricing data for derivatives,” Sam Kim, partner at Umbrella Network — a decentralized layer-two oracle solution — tells Magazine.


Putting the trust back in software testing in 2022

Millions of organisations rely on manual processes to check the quality of their software applications, despite a fully manual approach presenting a litany of problems. Firstly, with more than 70% of outages caused by human error, testing software manually still leaves companies highly prone to issues. Secondly, it is exceptionally resource-intensive and requires specialist skills. Given the world is in the midst of an acute digital talent crisis, many businesses lack the personnel to dedicate to manual testing. Compounding this challenge is the intrinsic link between software development and business success. With companies coming under more pressure than ever to release faster and more regularly, the sheer volume of software needing testing has skyrocketed, placing a further burden on resources already stretched to breaking point. Companies should be testing their software applications 24/7, but the resource-heavy nature of manual testing makes this impossible. Performing repetitive tasks is also demotivating, and that loss of focus is often what leads to critical errors in the first place.


December 2021 Global Tech Policy Briefing

CISA and the National Security Agency (NSA), in the meantime, released the third installment of their four-part 5G cybersecurity guidance on December 2. According to CISA’s statement, “Devices and services connected through 5G networks transmit, use, and store an exponentially increasing amount of data. This third installment of the Security Guidance for 5G Cloud Infrastructures four-part series explains how to protect sensitive data from unauthorized access.” The new guidelines run on zero-trust principles and reflect the White House’s ongoing concern with national cybersecurity. ... On December 9, the European Commission proposed a new set of measures to ensure labor rights for people working on digital platforms. The proposal will focus on transparency, enforcement, traceability, and the algorithmic management of what it calls, in splendid Eurocratese, “digital labour platforms.” The number of EU citizens working for digital platforms has grown 500 percent since 2016, reaching 28 million, and will likely hit 43 million by 2025. Of the current 28 million, 59 percent work with clients or colleagues in another country.


10 Predictions for Web3 and the Cryptoeconomy for 2022

Institutions will play a much bigger role in DeFi participation — Institutions are increasingly interested in participating in DeFi. For starters, institutions are attracted to interest-based returns that are higher than those of traditional financial products. Also, the cost reduction in providing financial services using DeFi opens up interesting opportunities for institutions. However, they are still hesitant to participate in DeFi. Institutions want to confirm that they are only transacting with known counterparties that have completed a KYC process. Growth of regulated DeFi and on-chain KYC attestation will help institutions gain confidence in DeFi. ...  DeFi insurance will emerge — As DeFi proliferates, it also becomes the target of security hacks. According to London-based firm Elliptic, losses from DeFi exploits in 2021 totaled over $10B. To protect users from hacks, viable insurance protocols guaranteeing users’ funds against security breaches will emerge in 2022. ... NFT-based communities will give material competition to Web 2.0 social networks — NFTs will continue to expand in how they are perceived.


Firmware attack can drop persistent malware in hidden SSD area

Flex capacity is a feature in SSDs from Micron Technology that enables storage devices to automatically adjust the sizes of raw and user-allocated space to achieve better performance by absorbing write workload volumes. It is a dynamic system that creates and adjusts a buffer of space called over-provisioning, typically taking between 7% and 25% of the total disk capacity. The over-provisioning area is invisible to the operating system and any applications running on it, including security solutions and anti-virus tools. As the user launches different applications, the SSD manager adjusts this space automatically against the workloads, depending on how write- or read-intensive they are. ... One attack modeled by researchers at Korea University in Seoul targets an invalid data area with non-erased information that sits between the usable SSD space and the over-provisioning (OP) area, and whose size depends on the two. The research paper explains that a hacker can change the size of the OP area by using the firmware manager, thus generating exploitable invalid data space.
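To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python; it is an illustrative model, not the researchers' actual code, and the disk size and ratios are hypothetical:

    # Illustrative model: inflating the over-provisioning (OP) ratio moves a
    # slice of formerly user-visible capacity, possibly still holding
    # non-erased data, outside the area the operating system can see.
    def hidden_region_gb(total_gb: float, op_before: float, op_after: float) -> float:
        """Capacity that silently leaves the OS-visible area when the
        OP ratio grows from op_before to op_after."""
        return total_gb * (op_after - op_before)

    # Example: a 512 GB disk whose OP area grows from 7% to 25% of capacity.
    print(f"{hidden_region_gb(512, 0.07, 0.25):.1f} GB hidden")  # 92.2 GB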


'Businesses need to build threat intelligence for cybersecurity': Dipesh Kaura, Kaspersky

Organizations across industries are faced with the challenge of cybersecurity, and the need to build threat intelligence holds equal importance for every business that thrives in a digital economy. While building threat intelligence is crucial, it is also necessary to have a solution that understands the threat vectors for every business, across every industry. A holistic threat intelligence solution examines the nitty-gritty of an enterprise's security framework and extracts the best actionable insights. A threat intelligence platform must capture and monitor real-time feeds from across an enterprise's digital footprint and turn them into insights to build a preventive posture, instead of a reactive one. It must diagnose and analyze security incidents on hosts and the network, checking signals from internal systems against unknown threats, thereby minimizing incident response time and disrupting the kill chain before critical systems and data are compromised.


IT leadership: 3 ways to show gratitude to teams

If someone on your team takes initiative on a project, let them know that you appreciate them. Pull them aside, look them in the eye, and speak truthfully about how much their extra effort means to you, the team, and the company. Make your thank-yous genuine, direct, and personal. Most individuals value physical tokens of appreciation in addition to expressed gratitude. If you choose to offer a gift, make it as personalized as you can. For example, an Amazon gift card is nice – but a cake from their favorite bakery is even nicer. Personalization means that you’ve thought about them as a person, taken the time to consider what they like, and recognized their contributions as an individual. Contrary to the common belief that we should be lavish with our praise, I would argue that it’s better to be selective. Recognize behavior that lives up to your company’s values and reserve the recognition for situations where it is genuinely deserved. If a leader showers praise when it’s not really warranted, they devalue the praise that is given when team members actually go above and beyond.


Top 5 AI Trends That Will Shape 2022 and Beyond

Under the umbrella of technology, there are several terms with which you must already be familiar, such as artificial intelligence, machine learning, deep learning, blockchain technology, cognitive technology, data processing, data science, and big data; the list is endless. Just imagine, how would we have survived the pandemic outbreak if there were no technology? What if there were no laptops, PCs, tablets, smartphones, or any sort of gadgets during COVID-19? How would human beings have earned a living? What if there were no Netflix to binge-watch or no social media applications during the coronavirus? Undoubtedly, that’s extremely intimidating and intriguing at the same time. Isn’t it giving you goosebumps wondering how fast the technology is advancing? Let’s flick through some jaw-dropping statistics first. Did you know that there are more than 4.88 billion mobile phone users all across the world now? According to the technology growth statistics, almost 62% of the world’s population own a smartphone device.


Introducing the Trivergence: Transformation driven by blockchain, AI and the IoT

Blockchain is the distributed ledger technology underpinning the cryptocurrency revolution. We call it the internet of value because people can use blockchain for much more than recording crypto transactions. Distributed ledgers can store, manage and exchange anything of value — money, securities, intellectual property, deeds and contracts, music, votes and our personal data — in a secure, private and peer-to-peer manner. We achieve trust not necessarily through intermediaries like banks, stock exchanges or credit card companies but through cryptography, mass collaboration and some clever code. In short, blockchain software aggregates transaction records into batches or “blocks” of data, then links and time-stamps the blocks into chains that provide an immutable record of transactions, with any level of privacy or transparency desired. Each of these foundational technologies is uniquely and individually powerful. However, when viewed together, each is transformed. This is a classic case of the whole being greater than the sum of its parts.
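As a rough illustration of that last sentence, here is a toy Python sketch of blocks linked into a tamper-evident chain; it is a simplification for intuition only, since real blockchains add consensus, signatures and much more:

    import hashlib, json, time

    def make_block(transactions, prev_hash):
        # Each block batches transactions, records a timestamp, and links
        # back to the previous block by embedding its hash.
        block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    genesis = make_block(["genesis"], prev_hash="0" * 64)
    block1 = make_block(["alice->bob: 5"], prev_hash=genesis["hash"])
    block2 = make_block(["bob->carol: 2"], prev_hash=block1["hash"])
    # Tampering with block1 would change its hash and break block2's link.
    assert block2["prev_hash"] == block1["hash"]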


Sustainability will be a key focus as the transport sector transitions in 2022

Delivery is also an area where we expect to see the movement towards e-fleets grow. We’ve already seen this being trialled, with parcel-delivery company DPD making the switch to a fully electric fleet in Oxford. It’s estimated that by replicating this in more cities, DPD could reduce CO2 by 42,000 tonnes by 2025. While third-party delivery companies offer retailers an efficient service, carrying as many as 320 parcels a day, this model is challenged by customers’ growing expectation that they can receive deliveries within hours. Sparked by lockdowns, which led to a 48% increase in online shopping, the “rapid grocery delivery” trend looks set to grow in 2022. Grocery delivery company Getir, for example, built a fleet of almost 1,000 vehicles in 2021 to service this need – and is planning to spend £100m more to expand its offering. Given the current driver recruitment crisis, which is affecting delivery and taxi firms, we are not expecting many other operators to invest that kind of money into building new fleets, though. Instead, you are more likely to see retailers working with existing fleets.



Quote for the day:

"Cream always rises to the top...so do good leaders." -- John Paul Warren

Daily Tech Digest - December 30, 2021

The CISO as Sustaining Force: Helping Infosec Staff Beat Burnout

CISOs should also be looking for common signs of burnout itself which team members might be exhibiting, including: A sharp drop in quantity and timeliness of output; A general lack of energy and enthusiasm around job functions; Continual signs of anxiety and stress; An extreme irritability toward co-workers and duties; and Significant changes in social patterns with co-workers. If some of these characteristics are present, the CISO has a few options for addressing them. One is to examine possible workload issues. Even the most resilient team members can burn out if the workload is crushing. If a staffer is exhibiting signs of burnout, an assessment can be made as to whether certain tasks should be spread out among other staffers, if possible. When taking this route, it's important for the CISO to let team members know that this is being done to gain more scale, not as a punitive measure. If the burnout signs point to an especially stressful infosec assignment, such as protecting assets from threats that are rapidly increasing, a discussion regarding giving the staffer more support may help them feel less alone in a challenging situation.


3 big problems with datasets in AI and machine learning

Researchers at the University of California, Los Angeles, and Google investigated the problem in a recently published study titled “Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research.” They found that there’s “heavy borrowing” of datasets in machine learning — e.g., a community working on one task might borrow a dataset created for another task — raising concerns about misalignment. They also showed that only a dozen universities and corporations are responsible for creating the datasets used more than 50% of the time in machine learning, suggesting that these institutions are effectively shaping the research agendas of the field. “SOTA-chasing is bad practice because there are too many confounding variables, SOTA usually doesn’t mean anything, and the goal of science should be to accumulate knowledge as opposed to results in specific toy benchmarks,” Denny Britz, a former resident on the Google Brain team, told VentureBeat in a previous interview. “There have been some initiatives to improve things, but looking for SOTA is a quick and easy way to review and evaluate papers. Things like these are embedded in culture and take time to change.”


Building an End-to-End Open-Source Modern Data Platform

The good news is that the serverless option is exactly what we’re looking for at this stage, even if the product isn’t open-source. That’s because we want something that can scale in terms of storage and query performance without necessitating dedicated maintenance efforts. And so the ideal option when getting started is a serverless managed offering — this is true for all of our components that necessitate elasticity, not just the data warehouse. ... And so it would make sense for us to leverage BigQuery as our data warehouse for this platform, but this choice doesn’t generalize: in other scenarios it may make more sense to opt for another option. When picking your data warehouse, you should take into account factors like pricing, scalability, and performance and then pick the option that fits your use case the best. ... In an ELT architecture, the data warehouse is used to store all of our data layers. This means that we won’t just use it to store the data or query it for analytical use cases, but we’ll also leverage it as our execution engine for the different transformations.
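As a minimal sketch of that last point, assuming BigQuery's Python client and hypothetical raw.events and analytics.daily_counts tables, a transformation in an ELT setup can run entirely inside the warehouse:

    from google.cloud import bigquery

    client = bigquery.Client()  # picks up default project credentials

    # ELT: the transformation itself runs inside the warehouse as SQL,
    # so the warehouse is the execution engine, not an external system.
    client.query(
        """
        CREATE OR REPLACE TABLE analytics.daily_counts AS
        SELECT DATE(event_ts) AS day, COUNT(*) AS events
        FROM raw.events
        GROUP BY day
        """
    ).result()  # block until the transformation job finishes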


Research Opens the Door to Fully Light-Based Quantum Computing

A team of researchers from Japan's NTT Corporation, the University of Tokyo, and the RIKEN research center has announced the development of a fully photonics-based approach to quantum computing. Taking advantage of the quantum properties of squeezed light sources, the researchers expect their work to pave the road towards faster and easier deployments of quantum computing systems, avoiding many practical and scaling pitfalls of other approaches. Furthermore, the team is confident their research can lead to the development of rack-sized, large-scale quantum computing systems that are mostly maintenance-free. The light-based approach in itself brings many advantages compared to traditional quantum computing architectures, which can be based on a number of approaches (trapped ions, silicon quantum dots, and topological superconductors, just to name a few). However, all of these approaches are somewhat limited from a physics perspective: they all need to employ electronic circuits, which leads to Ohmic heating (the waste heat that results from electrical signals' trips through resistive semiconductor wiring).


5 Cybersecurity Trends to Watch in 2022

Both state and national laws protecting consumer privacy are expected in 2022 by Trevor Hughes, president and CEO of the International Association of Privacy Professionals (IAPP). “The trendlines for privacy that formed in 2021 will accelerate and will bring new risks and complexity for organizations,” Hughes explained. “More national laws will be passed. More state laws will be passed. More (and heftier) enforcement will occur.” The trade-off for business is that privacy protections will be something that end users are more concerned about. ... “Social engineering will continue to work pretty dang well,” Stairwell’s Mike Wiacek said about 2022. “Social engineering is one of the most difficult security issues to address because no compliance, governance or risk-management action can address the fact that people are imperfect and susceptible to being duped.” ... Email will be increasingly hit in 2022 with targeted, high-quality spear-phishing attempts, and will require a change in defense tactics, according to Troy Gill, senior manager of threat intelligence with Zix | AppRiver.


6 things in cybersecurity we didn’t know last year

It’s no secret that tech companies are some of the biggest holders of user data, and — less surprisingly — a frequent target of government data requests that seek information for criminal investigations. But Microsoft this year warned of the growing trend of the government attaching secrecy orders to search warrants, gagging the company from telling its users when their data is subject to an investigation. Microsoft said one-third of all legal orders come with secrecy provisions, many of which are “unsupported by any meaningful legal or factual analysis,” according to the company’s consumer security chief Tom Burt. Microsoft said secrecy orders were endemic across the entire tech industry. In April, the FBI launched a first-of-its-kind operation to remove backdoors, left behind by hackers weeks earlier, from hundreds of U.S. company email servers. China was ultimately blamed for the mass exploitation of vulnerabilities in Microsoft’s Exchange email software, which the hackers used to attack thousands of company email servers around the U.S. to steal contact lists and mailboxes.


Building an AI-Powered Content Collaboration Platform on Azure

Critical to any model’s success is the dataset’s quality. Instead of relying on ready-made data collections, we employ a manual process for Reflexion.ai to replicate real-world use cases better. We also have firm guidelines in place to ensure that the collected data is usable. This data requirement sparked the need for a validation exercise to help weed out abnormalities detected early in the process. This exercise helps us avoid backtracking later when the model fails accuracy tests. Since manually validating every data point is tedious, we explored options to automate the process. We realized Azure Machine Learning could help. Azure Machine Learning helps us develop scripts to automate initial dataset checks. We also benefit from built-in collaboration on notebooks and datasets, making it easier for multiple developers to work in parallel. This workflow assures us that the dataset is always in perfect condition, allowing our developers to focus on other aspects of the operation whenever a need for optimization arises. Data collection is an inherently iterative process, and in turn, the datasets are constantly evolving.
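For flavour, a hypothetical sketch of such an automated check (not Reflexion.ai's actual scripts) might flag abnormalities before a dataset reaches training:

    import pandas as pd

    def validate_dataset(df: pd.DataFrame, required_columns) -> list:
        """Return a list of issues that should block the dataset from training."""
        issues = []
        missing = set(required_columns) - set(df.columns)
        if missing:
            issues.append(f"missing columns: {sorted(missing)}")
        if df.duplicated().any():
            issues.append(f"{int(df.duplicated().sum())} duplicate rows")
        for col in set(required_columns) & set(df.columns):
            nulls = int(df[col].isna().sum())
            if nulls:
                issues.append(f"column '{col}' has {nulls} null values")
        return issues

    # Example: an empty result means the dataset passes the initial checks.
    print(validate_dataset(pd.DataFrame({"label": [1, 0, 1]}), ["label", "frame_id"]))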


Making a Simple Data Pipeline Part 4: CI/CD with GitHub Actions

CI/CD is an abbreviation for Continuous Integration/Continuous Deployment (or Continuous Delivery). Much ink has been spilled defining the scope of each of these terms. We will prioritize basic concepts over comprehensive coverage, and show how we can apply those basics to our simple pipeline. Continuous Integration (CI) refers to using automation to frequently build, test, and merge code changes to a branch in a shared repository. The basic motivation behind CI is pursuing faster developer iteration and deployment of changes compared to a process with larger, infrequent integration events where many changes, often from multiple developers, are de-conflicted and tested at the same time. Continuous Deployment (CD) refers to using automation to frequently redeploy a software project. The basic motivation behind CD is freeing operations teams from executing time-consuming and error-prone manual deployment processes while getting changes out to users quickly. For our batch processing pipeline, deployment simply means re-running the pipeline to update the database when changes are pushed to main.
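To make this concrete, a hypothetical GitHub Actions workflow for such a pipeline might look like the following; the file name, commands and script names are assumptions, not the article's actual configuration:

    # .github/workflows/pipeline.yml
    name: pipeline
    on:
      push:
        branches: [main]
    jobs:
      ci-cd:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-python@v2
            with:
              python-version: "3.9"
          - run: pip install -r requirements.txt
          - run: pytest                  # CI: build and test the change
          - run: python run_pipeline.py  # CD: redeploy by re-running the pipeline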


Data will continue to move to the edge in 2022

The ultimate edge location, though, will continue to be in the phones and laptops. Web app developers continue to leverage the power of browser-based storage while exploring more efficient ways to distribute software. WebAssembly (Wasm) is an emerging standard that can bring more powerful software to handsets and desktops without complicated installation or permissioning. Computer scientists are also working at a theoretical level by redesigning their algorithms to be distributed to local machines. IBM, for instance, is building AI algorithms that can split the jobs up so the data does not need to move. When they’re applied to data collected by handsets or other IoT devices, they can learn and adapt while synchronizing only essential data. This distributed buzzword is also more commonly found in debates about control. While the push by some to create a distributed web, sometimes called Web3, is driven more by political debates about power than practical concerns about latency, the movement is in the same general direction.


The Architecture of a Web 3.0 application

Unlike Web 2.0 applications like Medium, Web 3.0 eliminates the middle man. There’s no centralized database that stores the application state, and there’s no centralized web server where the backend logic resides. Instead, you can leverage blockchain to build apps on a decentralized state machine that’s maintained by anonymous nodes on the internet. By “state machine,” I mean a machine that maintains some given program state, along with the future states allowed on that machine. Blockchains are state machines that are instantiated with some genesis state and have very strict rules (i.e., consensus) that define how that state can transition. Better yet, no single entity controls this decentralized state machine — it is collectively maintained by everyone in the network. And what about a backend server? Instead of a backend controlled by a single company, as with Medium, in Web 3.0 you can write smart contracts that define the logic of your applications and deploy them onto the decentralized state machine. This means that every person who wants to build a blockchain application deploys their code on this shared state machine.
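To ground the state machine idea, here is a toy Python sketch with hypothetical balances and a single transfer rule; real chains encode such rules in their consensus logic and smart contracts:

    # A genesis state plus a strict transition rule that every node
    # applies identically; invalid transitions are simply rejected.
    GENESIS = {"alice": 10, "bob": 0}

    def apply_transfer(state, sender, receiver, amount):
        if amount <= 0 or state.get(sender, 0) < amount:
            raise ValueError("invalid state transition")
        new_state = dict(state)
        new_state[sender] -= amount
        new_state[receiver] = new_state.get(receiver, 0) + amount
        return new_state

    state = apply_transfer(GENESIS, "alice", "bob", 3)
    print(state)  # {'alice': 7, 'bob': 3}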



Quote for the day:

"My philosophy of leadership is to surround myself with good people who have ability, judgment and knowledge, but above all, a passion for service." -- Sonny Perdue

Daily Tech Digest - December 29, 2021

Hackers Are Getting Better and Better At Defeating Your 2FA Security

Two-factor authentication, or 2FA, has been sold to web users as one of the most important and trustworthy tools for securing your digital life. You probably know how it works: By supplying an account with not just your password but also a secondary piece of information (typically an automated code texted to your phone or device of choice), companies can verify that whoever signs into your account is definitely you and not just some goon who’s managed to get their hands on your personal information. However, according to new research, said goons have unfortunately found a number of effective ways to get around your 2FA protections—and they’re using these methods more and more. The study, put out by academic researchers with Stony Brook University and cybersecurity firm Palo Alto Networks, shows the recent discovery of phishing toolkits that are being used to sneak past authentication protections. Toolkits are malicious software programs that are designed to aid in cyberattacks. 


5 HR Trends You will be Paying Attention in 2022

Digitisation has been a hot topic in the last few years. We are now seeing rapid adoption of HR technology right from “hire to retire”. Automation in HR has now become a reality that has touched nearly all aspects of human resource management. As we move forward, the expectation is to have all HR/employee services in the palm of an employee’s hand. Digitisation would enable organisations to overcome HR challenges and improve business efficiency and workplace productivity. While most things will move to digital, many elements of HR (of being humane) cannot be replaced. Teams will become more complex, and employees will therefore need more human attention and empathy. AI will continue to help organisations in a lot of smart ways, but when it comes to physical and mental well-being concerns, HR will have to continue to tackle employee issues more empathetically. Retaining talent was always one of the key priorities, but now the focus will be more on keeping employees happy and willing to stick around. The Covid outbreak has forced the workforce to reprioritise their careers and choose employers/jobs that fit their new priorities.


The tech trends of next year: Innovations that will reshape business in the future

Automation has been with us for centuries, but its application in the digital age is something entirely new. We're seeing it creep into almost every aspect of our lives, with far-reaching and never-before-seen results. Take, for instance, a sporting event such as the 2021 Formula 1 Heineken Dutch Grand Prix, where fans could order tickets, procure refreshments, secure purchases and get all their questions answered through a chatbot that runs through WhatsApp. With the help of data reporting and features such as suggested answers, internal collaboration and skill-based routing, the Dutch Grand Prix could interact with fans efficiently and effectively. This was made possible by deploying a customer data platform (CDP) to manage event pre-registration data, and using marketing cloud software to introduce attendee reductions due to the pandemic. The technology helped create a seamless experience for both event organisers and fans. While this kind of technological automation has been focused around improving customer experience, it has the potential to impact employee experience (EX) as well.


Importance of Data Discovery in Data Mesh Architecture

The full notion of data mesh is the recognition that how we model, produce and consume data is decoupled. With decoupled data, the common concern is: if users need to access data or services that they didn’t create, how are they going to find them and learn to use them? It is this part of the data mesh that affects data discovery the most. Data mesh splits up centralized data into data domains and allows users to apply high-quality data product thinking to how data is shared. Data discovery is essentially a capability that enables the data and control planes of the data mesh, which creates a better environment for discovering and tagging data. The companies that already have a data mesh model initially need a data discovery platform to discover and understand their data, which is where discovery starts with data mesh. Then, as the data teams start owning their data by adding tags and ownership, data mesh allows these teams to invite other users through democratized access to data while maintaining full governance and control over a source of truth with distributed ownership — this is the main intersection of discovery and its role in data mesh.


22 Habits to Become a Better Data Scientist in 2022

It doesn't matter how skillful you are at coding or creating models; if you can’t understand the business you’re in, you won’t succeed as a data scientist. No one is going to value your work if you don’t help the company you work for reach its goals — no matter how well your model performs. Do some research to learn more about the company you work for and the industry it’s in, and take some time to think about how a data scientist like you can help reach some of the goals the company has. Asking questions of your boss and colleagues helps a lot too. They can help you better understand your role in the company. Every year a new technology rises, tools become obsolete and code becomes deprecated. This is why you should always keep an eye on the new stuff in the field by reading blogs, research papers, and books. Staying up to date will help you take advantage of the latest advancements in data science. As a result, you will be one step ahead of the rest and in most cases get the best performance possible in a project. Being open to learning how to use new tools and adapting to change is a habit that will help you in your career (this leads us to our next habit).


Why DevSecOps offers the ‘most transformative’ approach to application security

Clearly, DevSecOps requires a high degree of trust between the developer and security sides of the organization, according to Dooley. In part, that’s because DevSecOps is ultimately best delivered through automating security as much as possible during app development. To get to a true DevSecOps program, security teams must start by providing data to developers that is presented in the form in which they operate—which for many DevOps teams is through a Jira ticket, Dooley said. “Show up in the packaging and format that they’re used to, and supply them with all the information that they need to just treat [security issues] like a bug or like a feature,” he said. Thus, the first level on the journey to DevSecOps can involve supplying developers with a secure code sample that fixes a certain issue in the code, Dooley said. But this secure code still needs to be implemented manually. At the next level, companies can enable semi-automated remediation, he said. This can involve automatically disabling issues that are creating a security exposure. With this approach, a human still has to sign off on the final build.


5 movies every HR professional must watch in 2022

The 2009 Oscar-winning sports drama starring Sandra Bullock was released at a time when racial bias and discrimination were quite prevalent in the world. This heart-rending story is about a seventeen-year-old African American foster child who is taken into care by a rich southern family whose matriarch found something precious inside him that no one else could - a striving will for excellence. What HR professionals can learn from the movie - This movie is a classic example of the ‘nature vs nurture’ discussion. Sandra Bullock’s Oscar-winning character practices what she preaches, i.e., equality, care and sensitivity beyond societal prejudice and bias, utilising it to unleash the potential of the person. Earlier, HR leaders may have tended to overlook the smallest details of human personalities, but in the new world of work, these nuances have become quite integral to managing your workforce and motivating them forward. It is important to understand that every contributor is bringing something to the table and, just like Leigh Anne Tuohy from the movie, it is really important for HR leaders to recognise the differences in their employees’ experiences and nurture them to their strengths.


When employees leave, is your data walking out the door?

Everything from customer lists to intellectual property is in danger of being exfiltrated and for a variety of reasons. It may be that employees feel entitled to the intellectual property they helped create. It may be that they want to jump start their work with a new company and that taking data, in their minds, is no crime. It may be that they’re disgruntled and looking to do harm by selling the data to a competitor. In the most extreme case, the employees in question may even be cybercriminals-for-hire who infiltrate enterprises with the sole purpose of causing harm. Amid all these possibilities, it is no longer reasonable for organizations to risk leaving the protection of their most valuable data up to chance. Organizations with valuable intangible assets must consider performing digital forensic scans as part of their standard operating procedures when offboarding employees, to identify threat actors and the data they’re exfiltrating. Doing so would enable organizations to better manage the risk of insider activity and give them the means to recover critical data and in some instances even seek legal recourse.


CISOs will prioritise secure automation in 2022

External threats are not the only problem. The disruption of the past 18 months has seen many organisations realise that their strategic security posture was behind the curve when the request for remote work became a requirement. For example, research conducted during the summer of 2020 found that the biggest threat noted by security departments was the inability to implement multi-factor authentication to facilitate secure remote worker access, something that should have been in place already. However, with resources stretched to breaking point by the immediate demands of the almost exclusively remote workforce, many organisations had to leave strategic security projects on the back burner. Now, as they urgently try to catch up, security teams are also facing an industry-wide shortage of talent. The latest data from the 2021 (ISC)2 Cybersecurity Workforce Study estimates that an additional 700,000 professionals have joined the cyber security sector, but that the gap between the number of additional professionals needed to adequately defend organisations and the number currently available stands at 2.7 million. 


Best practices for developing governable AI

Due to the ever-evolving landscape of open source libraries, vendors, and approaches to building ML models as well as the shortage of qualified ML engineers, there is a significant lack of industry best practices for creating deployable, maintainable, and governable ML models. When developing ML models with governance in mind, the most important considerations are reperformance, version control, interpretability, and ease of deployment and maintainability. ... Reperformance is the ability to reperform or reproduce a transaction or a model training and obtain identical results. Much has been said about the “reproducibility crisis” in science, and the AI/ML community is not immune from this criticism. Creating machine learning systems that are reproducible is definitely possible, and putting in the effort up front to do so ultimately yields more robust, dependable deployments, fewer headaches for developers, and fewer questions from auditors and other reviewers.
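As a small illustration of reperformance (a minimal sketch with toy data, not a prescribed standard), fixing every seed and fingerprinting the output lets a reviewer verify that a re-run reproduces the original exactly:

    import hashlib
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    SEED = 42
    rng = np.random.RandomState(SEED)
    X, y = rng.rand(200, 5), rng.randint(0, 2, 200)  # toy training data

    model = RandomForestClassifier(random_state=SEED).fit(X, y)

    # Identical inputs, seeds, and library versions should yield an
    # identical fingerprint on every reperformance of the training.
    fingerprint = hashlib.sha256(model.predict_proba(X).tobytes()).hexdigest()
    print(fingerprint)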



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - December 28, 2021

What app developers need to do now to fight Log4j exploits

Your first line of defense is to upgrade to the most current Log4j versions. Initially, Apache released a patch that turned out to still have vulnerabilities. The most current versions are Log4j v.2.17.0, if you are running Java 8 or later, and Log4j v.2.12.2, if you are running Java 7 across your web app infrastructure. These turn off JNDI by default and remove message lookup access, both of which lie at the heart of the various vulnerabilities. Disabling JNDI could break something in your apps, so test this carefully before you implement it in any production systems. You might also want to stop any Java-based logging if you don’t need it for any of your applications. Again, test before deploying. And those of you who run your own Minecraft server should check whether it is running Minecraft v.1.8.8 or later; these versions are vulnerable. Microsoft has released Minecraft v.1.18.1, which fixes the issue. You should upgrade immediately or find another, more trustworthy server that has been fixed. Security vendors have worked overtime to augment their tools, and you should take advantage of various free offers.
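For teams on Maven, one hedged example of this first line of defense is pinning the dependency explicitly, so that no transitive dependency pulls in an older release (version per the guidance above, for Java 8 or later):

    <!-- pom.xml: pin log4j-core so no transitive dependency
         drags in a vulnerable version -->
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
      <version>2.17.0</version>
    </dependency>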


Performance and scalability of blockchain networks to be key focus areas in 2022

There are many factors influencing the performance of blockchain technology. First, the choice of consensus mechanism is highly important, as this protocol or algorithm is responsible for striking a fine balance between the degree of decentralization, scalability, and security. Another key factor is network latency, as the strength of the dedicated bandwidth will play a vital role in broadcasting the transaction to all the nodes and collating their responses. Similarly, node infrastructure is also a deciding factor. It is important to allocate adequate input-output operations per second (IOPS). Also, the number of nodes, smart contracts, transaction payload size, transaction pooling, and local storage are vital factors influencing performance. The key to improving performance and scalability lies in selecting the right platform to meet our performance goals. There are many options available in the market. The industry is constantly exploring a diverse range of solutions to improve scalability and performance.


5 Internet Technology Predictions for 2022

At this point in time, December 2021, it’s unclear whether the crypto market — now known as “Web3” — is at a market peak equivalent to 1999, or whether it’s at the very beginning of its run like the web in 1993. Either way, I’m predicting a market correction in 2022. Here’s my reasoning: My main critique of Web3 currently is that nothing useful has been built using crypto and blockchains, other than tools for speculation like crypto exchanges and NFT marketplaces. The technical infrastructure of Web3 is both flawed and also not as decentralized as many crypto proponents claim. On the other hand, this same argument could be used to prop up the 1993 comparison — when the web was also immature and not ready for the mainstream. But given the lack of viable products in Web3, my contention is that the value of this market is wildly inflated right now. Remember that the first wave of Dot Com IPOs, starting with Netscape, didn’t kick off until the second half of 1995. That was a point when web platforms were fast maturing, and had attracted attention (and intense competition) from big tech companies like Microsoft and Oracle.


GOP Senator to Introduce 'Comprehensive' Crypto Regs Bill

A Republican senator will soon introduce a bill that, for the first time, attempts to regulate the cryptocurrency space. The bill would reportedly add investor protections, rein in stablecoins, which are pegged to fiat currency, and create a self-regulatory body under the jurisdiction of the U.S. Securities and Exchange Commission and its sister agency, the Commodity Futures Trading Commission. The proposal, first reported by Bloomberg, stems from Sen. Cynthia Lummis, R-Wyo., a longtime crypto-evangelist and one of two U.S. senators to have reportedly invested in virtual currency. Her cryptoassets reportedly total a quarter of a million dollars. In legislation she plans to introduce in early 2022, Lummis intends to provide regulatory clarity on stablecoins - long the subject of congressional debate over concerns around risks and liquidity - and define the different asset classes, while introducing additional protections to insulate investors against substantial losses, scams and potentially lax cybersecurity.


Developments that will define data governance and operational security in 2022

Rapid cloud-based adoption and disruptive business models have led Unicorns to experience unprecedented growth in revenue and customer acquisition – especially within the fields of Fintech, Healthtech and internet services. Data operations have scaled up to meet demand, however, data security hasn’t kept pace. A prime example of this is the data breach at Robinhood, in which an unknown third party used social engineering to glean information from a customer service representative over the phone. The bad-faith actor was able to gain access to sensitive customer support data, ultimately affecting over five million customers. Clearly, the customer support employee was over-privileged, meaning they had access to more data than was necessary for them to do their job effectively. Startups, especially those experiencing rapid growth, such as Robinhood, often start off with trust-based data access policies, where employees are given broad access to data, which initially fuels faster decision making.


What is a Quantum Convolutional Neural Network?

One of the most popular applications of CNNs is in the field of image classification. In terms of superposition and parallel computation, quantum computers offer significant advantages. The quantum convolutional neural network (QCNN) improves CNN performance by incorporating quantum environments. In this section, we’ll look at how the QCNN can help with image classification. The quantum convolution layer is a layer in a quantum system that behaves like a classical convolution layer. To obtain feature maps composed of new data, the quantum convolution layer applies a filter to the input feature map. Unlike the classical convolution layer, the quantum convolution layer uses a quantum computing environment for filtering. Quantum computers offer superposition and parallel computation, which are not available in classical computing and can reduce learning and evaluation time. Existing quantum computers, on the other hand, are still limited to small quantum systems.
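To make the quantum convolution layer concrete, here is a minimal sketch using PennyLane; the library choice and circuit are illustrative (a standard "quanvolution" toy, not any paper's exact model). Each 2x2 pixel patch is angle-encoded onto four qubits, mixed by a small random circuit acting as the filter, and read out as four expectation values:

    import pennylane as qml
    from pennylane import numpy as np

    n_qubits = 4  # one qubit per pixel in a 2x2 patch
    dev = qml.device("default.qubit", wires=n_qubits)

    # Fixed random parameters play the role of an (untrained) filter.
    rand_params = np.random.uniform(0, 2 * np.pi, size=(1, n_qubits))

    @qml.qnode(dev)
    def quanv_patch(patch):
        for j in range(n_qubits):
            qml.RY(np.pi * patch[j], wires=j)  # angle-encode each pixel
        qml.RandomLayers(rand_params, wires=list(range(n_qubits)))
        return [qml.expval(qml.PauliZ(j)) for j in range(n_qubits)]

    # One normalized 2x2 patch in, four feature-map channels out.
    print(quanv_patch([0.0, 0.3, 0.7, 1.0]))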


Guide to Polkadot blockchain app development and deployment

The custom blockchains built on top of Polkadot, also known as parachains, can carry out interchain transactions using the shared security provided by Polkadot in a trust-minimized way. This protocol gives birth to an interconnected internet of blockchains. ... Building a blockchain application can demand a lot of resources. A blockchain project can have good features and real-world value, but without the right resources to fund its development, it won’t get off the ground. Substrate is a generic blockchain framework. As such, it provides plug-and-play components for building blockchain applications and networks. Its modular and pluggable libraries enable you to implement signatures, consensus mechanisms, and other parts of the chain as desired. At the core of Substrate is a composable design that drives customization while building an application on the Polkadot network. Polkadot as a network provides the Substrate chains with the protocols to send messages to each other.


In 2022, security will be Linux and open-source developers job number one

Behind the log4j mess is another problem: how do you know what open-source components your software is using? For example, log4j2 has been in use since 2014. You can't expect anyone to remember if they used that first version in some program you're still using today. The answer is one that the open-source community started taking seriously in recent years: the creation of Software Bills of Materials (SBOMs). An SBOM spells out exactly what software libraries, routines, and other code have been used in any program. Armed with this, you can examine which component versions are used in your program. As David A. Wheeler, the Linux Foundation's Director of Open Source Supply Chain Security, has explained, by using SBOMs and verified reproducible builds, you can make sure you know what's what in your programs. That way, if a security hole is found in a component, you can simply patch it rather than search like a maniac for any possible problem code before being able to fix it.
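For a sense of what an SBOM records, here is a minimal hand-written fragment in the CycloneDX format; real SBOMs are generated by build tooling and list every component, not just one:

    {
      "bomFormat": "CycloneDX",
      "specVersion": "1.3",
      "components": [
        {
          "type": "library",
          "name": "log4j-core",
          "version": "2.17.0",
          "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.0"
        }
      ]
    }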


Will Autonomous Vehicle Makers Get Back into Gear in 2022?

The biggest challenge for vehicle manufacturers and their technology partners is developing models that can deliver a true autonomous driving experience. Within the AV industry, full autonomy is referred to as Level 5 Advanced Driver Assistance Systems (ADAS). “At Level 5, there is no human intervention required and the vehicle is fully capable of driving itself,” says Matt Desmond, automotive principal industry analyst at business advisory firm Capgemini Americas. None of the AVs marketed to be sold in the next few years will incorporate Level 5 ADAS. “Delivering a truly autonomous vehicle—without steering wheel, accelerator, or brakes—is a steep technological and safety challenge, and there are many significant hurdles to achieving fully autonomous solutions,” Desmond says. In the meantime, leading vehicle manufacturers and technology firms are investing massive sums in developing, testing, and refining AV systems in an effort to mitigate technical issues and deliver a robust technology foundation, he notes. As things currently stand, Level 5 ADAS vehicles may not reach market for at least several years. 


The physical office is dead (long live the office)

Rather than considering what offices should look like in the future, it's worth asking the deeper question of whether we need offices at all for most knowledge workers. The office originated based on the simple advantages that bringing humans physically together engendered. If your choice was to send a telegram across the country and wait days for a response, consolidating critical employees in the same office created a significant competitive advantage. During the dawn of the corporation, there was simply no reasonable alternative to having people in the same space in order to communicate effectively and efficiently. While we've had decent and effective remote working technologies for years before COVID, the balance of power in a working situation seems to reside with the majority. When the majority were in the same location, remote workers missed key interactions and were generally less effective than in-office counterparts. With the majority of knowledge workers now remote, those in a physical location are forced to act like remote workers.



Quote for the day:

"Listening to the inner voice & trusting the inner voice is one of the most important lessons of leadership." -- Warren Bennis

Daily Tech Digest - December 27, 2021

Defining the web3 stack

There are a lot of differences in the way that we interact with and build on top of blockchains versus databases in the traditional tech stack. With blockchains, data isn’t stored in a format that can be efficiently or easily consumed directly from other applications or front ends. Blockchains are optimized for write operations. You often hear about innovation centered on transactions per second, block time, and transaction cost. Blockchain data is written in blocks over the course of time, making anything other than basic read operations impossible. In most applications, you need features like relational data, sorting, filtering, full text search, pagination and many other types of querying capabilities. In order to do this, data needs to be indexed and organized for efficient retrieval. Traditionally, that’s the work that databases do in the centralized tech stack, but that indexing layer was missing in the web3 stack. The Graph is a protocol for indexing and querying blockchain data that makes this process much easier and offers a decentralized solution for doing so. Anyone can build and publish open GraphQL APIs, called subgraphs, making blockchain data easy to query.
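A hypothetical subgraph query shows the idea; the transfers entity here is made up, but first, orderBy, and orderDirection are the standard pagination and sorting arguments subgraphs expose:

    {
      transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
        id
        from
        to
        value
      }
    }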


The rise and rebirth of P2P file sharing – and why it will matter in 2022

The Internet itself was designed from the very beginning to be decentralised. The fact that most of our data and computing ended up being centralised on Big Tech’s cloud platforms is something of an evolutionary fluke. To understand why, it is helpful to have some historical context. We can think of the evolution of the Internet as being divided into four distinct phases. The First Internet (the “Al Gore Internet”) was about connecting computers together over a global network for the first time. The Second Internet was about getting people and businesses online and conducting commerce digitally for the first time. The Third Internet, the one we’re still in now, was all about mobile computing. The scale challenges created by the sudden demand for smartphones and their apps created intense economic pressure to centralise. Housing millions of similar servers in gigantic data centres was and is more economically efficient, and technically “good enough” for the mobile apps of the day.


Former Uber CSO Faces New Charge for Alleged Breach Cover-Up

The government's case hinges on Sullivan's alleged failure to report a 2016 data breach to authorities, which prosecutors accuse him of having mischaracterized as a less severe security incident, together with his allegedly paying hackers "hush money" to conceal the breach. When hackers emailed Uber in November 2016 to inform it of a breach they had perpetrated, the ride-sharing service was in the process of providing detailed answers to the Federal Trade Commission, stemming from a September 2014 breach. Prosecutors say that as the person nominated by Uber to provide sworn testimony to the FTC, Sullivan should have immediately disclosed the breach to the FTC and - in accordance with California's data breach notification law - Uber should have notified all affected state residents that their personal details had been obtained by attackers. "Institutions that store personal information of others must comply with the law," says Stephanie M. Hinds, the acting U.S. Attorney for the Northern District of California, where Sullivan formerly served as a federal prosecutor.


Council Post: Reflecting On The Cost Of Our Dream To Build The Coveted General AI

The powerful language models and the newer zero-shot text-to-image generation models are all marching really fast towards the intended goal of performing tasks for which they were not trained. Each one is outwitting the previous one in its applications and uses. DeepMind, one of the most well-known AI research institutes (owned by Alphabet), has centred its ultimate goal on achieving AGI. Interestingly, this year, the lab published a paper titled ‘Reward Is Enough‘, where the authors suggested that techniques like reward maximisation can help machines develop behaviour that exhibits abilities associated with intelligence. They further concluded that reward maximisation, and reinforcement learning by extension, can help achieve artificial general intelligence. Let’s take a closer look at GPT-3, created by DeepMind’s closest competitor, OpenAI, which created a major buzz in the scientific community. It was widely considered a massive breakthrough on the path to achieving general AI.


How AI, VR, AR, 5G, and blockchain may converge to power the metaverse

Several of the technologies that will enable the metaverse, including virtual and augmented reality and blockchain, have been slow to mature but are approaching a level of capability that is critical for success. Each has been missing the killer app that will drive development and widespread adoption forward. The metaverse could be that app. For VR, most headsets still need to be tethered to a PC or gaming console to achieve the processing power and communication speed required for smooth and immersive experiences. Only Meta’s Oculus Quest 2 has so far broken free of this cable constraint. But even that headset remains bulky, according to one of Meta’s VPs. With ever faster processors and higher speed wireless communications on the near horizon, better visual resolution and untethered experiences should emerge over the next few years. AR has achieved mostly niche adoption. In part, AR prospects likely suffered due to the high-profile market failure of Google Glass when introduced in 2012. 


The Dark Web Has Its Own People's Court

Threat actors operating in large underground forums often are quick to comply with underground court decisions because they want to protect their reputations. "Criminals work hard to build their reputation on these forums," DiMaggio says. "These forums are where ransomware affiliate recruiting takes place as well as malware sales, breach, and exploit access, and even hacking services are offered." Losing trust or getting banned from a forum can have a huge negative impact on a threat actor's ability to operate in the cyber underground, he says. In some extreme cases, threat actors have exposed the true identities of the cybercriminals — including physical address, social media profiles, and phone numbers — that might have scammed them, Analyst1 said. John Hammond, senior security researcher at Huntress, says practically every cybercrime forum or bulletin board has a sort of judicial system, or a "people's court" for handling disputes among criminals. "It's a strange sort of sportsmanship or code of conduct, where hackers, thieves, and scammers should not cross each other," he says.


Digital transformation: 4 tips to be a successful IT leader in 2022

Digital transformation can be overwhelming. You may be tempted to hand over the reins to a consulting service that has a track record of success implementing these new initiatives. It is important to know how to determine if this is the right option for your organization. Digital transformation often requires disruptive, cross-organizational change. Bringing in an outside expert can be the right answer. A transformation expert can help smooth transitions and have previous success to show as evidence for why the change is beneficial. If you do decide to bring in an outside partner, make sure that the team has subject matter experts familiar with your vertical. When vetting consulting partners, choose a team that is committed to learning the history of your business, the processes you have in place, and even your internal relationships. Bringing on a partner that is interested in integrating into your organization while the transformation happens can reduce friction and increase your likelihood of success.


The Secret of Delivering Machine Learning to Production

87% of ML projects are eventually not delivered to production (VB). This article from 2019 is cited in almost every MLOps startup pitch deck, and this number is well-established in the ML discourse. To be totally honest, I have tried to trace back this number and figure out how it was retrieved — and didn’t find any reliable source or research to support it. However, this number seems quite reasonable if you also consider projects that were stopped at an early stage of PoC. The more painful number is the relative amount of projects that were already committed to management or even to customers, in which significant efforts had already been invested — that were terminated before (or after) hitting production. In my previous post, “how to run due diligence for ML teams”, I give a high-level overview of the ingredients of successful ML teams. Here, you can find some practical advice on how to build high-impact ML teams. Companies invest millions in ML teams, employing top talent and paying accordingly, and the question hanging over all this is: “is it worth it?”


IoT predictions for 2022: the what, why and how of the year ahead

Internet everywhere is gaining traction, as satellite narrowband IoT is poised to plug the gap between terrestrial mobile connectivity services and non-cellular IoT networks. At present, ground-based connectivity services are far from ubiquitous, with vast swathes of the Earth’s surface not covered by cellular networks. This presents a colossal opportunity for satellite IoT, which solves the problem of mobile connectivity in remote locations. Potential applications include asset tracking, oil and gas industries, utilities, manufacturing and construction. Although Starlink is probably the most prominent exponent of satellite IoT, there’s a growing number of companies lining up to harness this new technology. 2022 could be a watershed year for the tech, helping to spawn new sectors and bring niche markets towards the mainstream. Digital twins are nothing new, but the terminology is beginning to cut through into everyday tech language. Being able to interpret physical objects in digital form with overlaid data to create a digital twin allows for highly complex simulations.


How Space Shapes Collaboration: Using Anthropology to Break Silos

In general, office space and architecture must be considered among the aspects that can improve or hinder collaboration, but they are not the only ones. Ideally, there should be a good mix of rituals, spaces, and rules that make people's work easier and make them feel connected to their co-workers. Another wonderful example is a ritual from a London marketing agency. Every Friday a different member of staff pushed a trolley filled with wine and beer, offering a glass of something to each employee. The person had to introduce themselves if they were new and ask each colleague how their week had gone. From the boss to the doorman, everyone had to push the trolley at some point. ... Company culture and frameworks can significantly lower barriers, even in virtual space. Companies like Accenture, Cisco or Lego work with the Scaled Agile Framework (SAFe). This framework actively facilitates collaboration at the team level, at the role level, and also between colleagues on different teams.



Quote for the day:

"Remember teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni

Daily Tech Digest - December 26, 2021

All You Need to Know About Unsupervised Reinforcement Learning

Unsupervised learning can be considered the approach to learning from huge amounts of unannotated data, and reinforcement learning the approach to learning from a very small amount of data. A combination of these learning methods can be considered unsupervised reinforcement learning, which is essentially an improvement on reinforcement learning. In this article, we discuss unsupervised reinforcement learning in detail, along with its special features and application areas. ... When we talk about the basic process followed by unsupervised learning, we define objective functions on it such that the process is capable of categorizing unannotated or unlabeled data. There are various problems that can be dealt with using unsupervised learning. Some of them are as follows: label creation, annotation, and maintenance is a challenging discipline that requires a lot of time and effort; many domains, such as law, medicine, and ethics, require expertise in annotation; and in reinforcement learning, reward annotation is also difficult.
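
One concrete flavor of unsupervised reinforcement learning is reward-free exploration, where the agent constructs its own objective rather than relying on annotated rewards. A minimal tabular sketch, assuming a toy corridor environment and a count-based novelty bonus (both are illustrative choices, not from the article):

```python
import random
from collections import defaultdict

# Toy 1-D corridor: states 0..9, actions move left (-1) or right (+1).
N_STATES, ACTIONS = 10, (-1, +1)
visit_counts = defaultdict(int)
Q = defaultdict(float)              # Q[(state, action)]
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

state = 0
for step in range(5000):
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt = min(max(state + action, 0), N_STATES - 1)
    visit_counts[nxt] += 1
    # No extrinsic, annotated reward: the agent rewards itself for
    # novelty, so rarely visited states look attractive.
    intrinsic_r = 1.0 / (visit_counts[nxt] ** 0.5)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (intrinsic_r + gamma * best_next
                                   - Q[(state, action)])
    state = nxt

print({s: visit_counts[s] for s in range(N_STATES)})  # corridor coverage
```

The point of the sketch is the reward line: the objective function is constructed from the data stream itself, which is exactly the "unsupervised" ingredient.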


Explainable AI (XAI) Methods Part 1 — Partial Dependence Plot (PDP)

Partial Dependence (PD) is a global, model-agnostic XAI method. Global methods give a comprehensive explanation over the entire data set, describing the impact of features on the target variable in the context of the overall data. Local methods, on the other hand, describe the impact of features at the level of an individual observation. Model-agnostic means that the method can be applied to any algorithm or model. Simply put, a PDP shows the marginal effect or contribution of individual features to the predicted value of your black-box model ... Unfortunately, PDP is not some magic wand you can wave on any occasion. It rests on one major assumption: the so-called assumption of independence, which is the biggest issue with PD plots. ... "If the feature for which you computed the PDP is not correlated with the other features, then the PDPs perfectly represent how the feature influences the prediction on average. In the uncorrelated case, the interpretation is clear: The partial dependence plot shows how the average prediction in your dataset changes when the j-th feature is changed."
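
The marginal effect described here follows directly from the definition: force feature j to each value on a grid, leave every other feature at its observed value, and average the model's predictions. A minimal sketch with scikit-learn (the synthetic dataset and gradient-boosting model are illustrative stand-ins for any black-box model):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid_size=20):
    """PD of one feature: the average prediction with that feature
    forced to each grid value while all others stay as observed."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v          # intervene on feature j only
        pd_values.append(model.predict(X_mod).mean())
    return grid, np.array(pd_values)

grid, pd_vals = partial_dependence(model, X, feature=0)
print(list(zip(grid[:3].round(2), pd_vals[:3].round(2))))
```

Note that the intervention `X_mod[:, feature] = v` can fabricate feature combinations that never occur together in reality when features are correlated, which is precisely why the independence assumption quoted above matters. (scikit-learn also ships this computation as `sklearn.inspection.partial_dependence`.)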


Worried about super-intelligent machines? They are already here

For anyone who thinks that living in a world dominated by super-intelligent machines is a “not in my lifetime” prospect, here’s a salutary thought: we already live in such a world! The AIs in question are called corporations. They are definitely super-intelligent, in that the collective IQ of the humans they employ dwarfs that of ordinary people and, indeed, often of governments. They have immense wealth and resources. Their lifespans greatly exceed that of mere humans. And they exist to achieve one overriding objective: to increase and thereby maximise shareholder value. In order to achieve that they will relentlessly do whatever it takes, regardless of ethical considerations, collateral damage to society, democracy or the planet. One such super-intelligent machine is called Facebook. ... “We connect people. Period. That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we have to do to bring more communication in. The work we will likely have to do in China some day. All of it.”


Supervised vs. Unsupervised vs. Reinforcement Learning: What’s the Difference?

Reinforcement learning is a technique that provides training feedback using a reward mechanism. The learning process occurs as a machine, or Agent, interacts with an environment and tries a variety of methods to reach an outcome. The Agent is rewarded or punished when it reaches a desirable or undesirable State. The Agent learns which states lead to good outcomes and which are disastrous and must be avoided. Success is measured with a score (denoted Q, which is why this style of reinforcement learning is often called Q-learning) so that the Agent can iteratively learn to achieve a higher score. Reinforcement learning can be applied to the control of a simple machine like a car driving down a winding road. The Agent would observe its current State by taking measurements such as current speed, direction relative to the road, and distances to the sides of the road. The Agent can take actions that change its State, like turning the wheel or applying the gas or brakes.
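
The iterative score update described above has a very compact tabular form, the classic Q-learning rule. A minimal sketch, assuming a toy five-cell "road" in place of the article's winding-road example:

```python
import random
from collections import defaultdict

# Q[(state, action)] estimates how good an action is; Q-learning nudges
# it toward the observed reward plus the best value from the next state.
Q = defaultdict(float)
alpha, gamma = 0.1, 0.95            # learning rate, discount factor
ACTIONS = (-1, +1)                  # steer left / steer right

def q_update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])

# Toy episodes: cells 0..4; cell 4 is the goal (+10), cell 0 is
# "off the road" (-10). The Agent starts mid-road and explores.
for episode in range(200):
    s = 2
    while 0 < s < 4:
        a = random.choice(ACTIONS)          # pure exploration
        s2 = s + a
        r = 10 if s2 == 4 else (-10 if s2 == 0 else 0)
        q_update(s, a, r, s2)
        s = s2

print(Q[(2, +1)], Q[(2, -1)])   # moving toward the goal scores higher
```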


Quantum computers: Eight ways quantum computing is going to change the world

Discovering new drugs takes so long partly because scientists mostly adopt a trial-and-error approach, testing thousands of molecules against a target disease in the hope that a successful match will eventually be found. Quantum computers, however, have the potential to one day resolve the molecular simulation problem in minutes. The systems are designed to carry out many calculations at the same time, meaning they could seamlessly simulate all of the most complex interactions between the particles that make up molecules, enabling scientists to rapidly identify candidates for successful drugs. This would mean that life-saving drugs, which currently take an average of 10 years to reach the market, could be designed faster -- and much more cost-efficiently. Pharmaceutical companies are paying attention: earlier this year, healthcare giant Roche announced a partnership with Cambridge Quantum Computing (CQC) to support research efforts tackling Alzheimer's disease.


What is a honeypot crypto scam and how to spot it?

Even though it looks like a part of the network, it is isolated and monitored. Because legitimate users have no motive to access a honeypot, all attempts to communicate with it are regarded as hostile. Honeypots are frequently deployed in a network's demilitarized zone (DMZ). This strategy separates the honeypot from the main production network while keeping it connected. A honeypot in the DMZ can be monitored from afar while attackers access it, reducing the danger of the main network being compromised. To detect attempts to infiltrate the internal network, honeypots can also be placed outside the external firewall, facing the internet. The actual location of a honeypot depends on how intricate it is, the type of traffic it is meant to attract, and how close it is to critical business resources. It will always be isolated from the production environment, regardless of where it is placed. Logging and viewing honeypot activity provides insight into the level and types of threats the network infrastructure faces, while diverting attackers' attention away from real-world assets.
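
The logging idea is simple to make concrete: a low-interaction honeypot can be little more than a listener on an otherwise unused port that records every connection attempt, since, as noted, legitimate users have no reason to touch it. A minimal sketch (the port and log format are illustrative; a real deployment would sit isolated, e.g. in the DMZ as described):

```python
import socket
from datetime import datetime, timezone

HONEYPOT_PORT = 2222  # unused port: nothing legitimate listens here

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", HONEYPOT_PORT))
    srv.listen()
    print(f"honeypot listening on :{HONEYPOT_PORT}")
    while True:
        conn, (ip, port) = srv.accept()
        # Any contact is suspicious by definition: log it, then drop it.
        stamp = datetime.now(timezone.utc).isoformat()
        with open("honeypot.log", "a") as log:
            log.write(f"{stamp} connection from {ip}:{port}\n")
        conn.close()
```

Reviewing a log like this over time gives exactly the visibility the excerpt describes: which addresses are probing, how often, and against which services.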


From DeFi to NFTs to metaverse, digital assets revolution is remaking the world

This decentralised concept offers both opportunities and challenges. How could a system work among a group of participants, some of whom could be bad apples, if they were given the option of pseudonymity? Who will update the ledger? How will we reach a uniform version of the truth? Bitcoin solved many of these long-standing issues with cryptographic consensus methods, a combination of private and public keys, and carefully aligned economic incentives. Suppose User A wants to transfer 1 bitcoin to User B. The transaction data would be authenticated, verified, and moved to the 'mempool' (the memory pool, a holding room for all unconfirmed transactions), where transactions are collected in groups, or 'blocks'. One block becomes one entry in the Bitcoin ledger, and around 3,000 transactions appear in one block. The ledger is updated every 10 minutes, and the system converges on the latest single version of the truth. The next big question is: who in the system gets to write the next entry in the ledger? That is where the consensus protocol comes into play.
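
In Bitcoin the answer is proof-of-work: whoever first finds a nonce that makes the block header hash fall below a difficulty target earns the right to write the next ledger entry. A toy sketch of that nonce search (the string header and hex-prefix difficulty are simplifications of Bitcoin's double-SHA-256 check against a 256-bit target):

```python
import hashlib

def mine(block_header: str, difficulty: int = 4) -> tuple[int, str]:
    """Try nonces until the SHA-256 digest starts with `difficulty`
    hex zeros -- a simplified stand-in for Bitcoin's target check."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# In practice the ~3,000 transactions are summarized by a Merkle root
# inside the header; here the header is just a placeholder string.
nonce, digest = mine("prev_hash|merkle_root|timestamp")
print(nonce, digest)
```

Because finding the nonce is expensive but verifying it takes a single hash, every participant can cheaply check who earned the right to append the block.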


The privacy dangers of web3 and DeFi – and the projects trying to fix them

Less discussed is the impact of web3 and DeFi on user privacy. Proponents argue that web3 will improve user privacy by putting individuals in control of their data via distributed personal data stores. But critics say that the transparent nature of public distributed ledgers, which makes transactions visible to all participants, is antithetical to privacy. "Right now, web3 requires you to give up privacy entirely," Tor Bair, co-founder of the privacy-focused blockchain Secret Network, tweeted earlier this year. "NFTs and blockchains are all public-by-default and terrible for ownership and security." Participants in public blockchains don't typically need to make their identities known, but researchers have demonstrated how transactions recorded on a blockchain can be linked to individuals. A recent paper by researchers at browser maker Brave and Imperial College London found that many DeFi apps incorporate third-party web services that can access users' Ethereum addresses. "We find that several DeFi sites rely on third parties and occasionally even leak your Ethereum address to those third parties – mostly to API and analytics providers," the researchers wrote.


The Importance of People in Enterprise Architecture

All employees in the organization should have a shared understanding of the overarching future state and be empowered to update the future state for their part of the whole. The communication and democratization discussed in the AS-IS section are also necessary for TO-BE. People need regular, self-service access to a continuously evolving future-state architecture description. Each person should have access that provides views specific to that person and their role, with links to the other people who will collaborate to promote that understanding and evolve the design. Progressive companies are moving away from the plan-build-run mentality, and this is changing the role of the architecture review board (ARB) operated by a central EA team. ARBs traditionally act as a bureaucratic toll-gate, performing their role after the design is finished to ensure all system qualities are accounted for and the design is aligned with the future-state approach. However, democratizing the enterprise architecture role and sharing design autonomy now requires collaboration in the initial phase of design, at the start of an increment. This collaboration ensures that the reasoning behind the enterprise-wide future state is understood and that the desired system qualities are carefully evaluated.


The Best Way to Manage Unstructured Data Efficiently

Many people place heavy focus on data analysis techniques and machine learning models when building a high-quality ML production pipeline. What is often missed, however, is that storage is one of the most important aspects of the pipeline, because the pipeline has three main components: collecting data, storing it, and consuming it. Effective storage methods not only boost storage capabilities but also enable more efficient collection and consumption. Object storage offers easy searching via customizable metadata, which helps with both. Not only do you want to choose the right storage technology, you also want to choose the right provider. AWS comes to mind as one of the best object storage providers, mainly because its infrastructure provides smooth service and easy scaling. Furthermore, for effective consumption of the data, there must be a software layer running on top of this storage for data aggregation and collection purposes. This is also an important choice, and it deserves another article dedicated to the topic.
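
The customizable-metadata point is easiest to see in code. A minimal sketch using boto3 against S3 (the bucket name, key, and metadata fields are illustrative assumptions, and AWS credentials are presumed to be configured):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "ml-raw-data"  # hypothetical bucket

# Object storage lets you attach free-form metadata at write time...
s3.put_object(
    Bucket=BUCKET,
    Key="images/cat_0001.jpg",
    Body=b"<raw image bytes>",
    Metadata={"label": "cat", "split": "train", "source": "crawler-v2"},
)

# ...which the consumption layer can read back without downloading the
# (potentially huge) object body itself.
head = s3.head_object(Bucket=BUCKET, Key="images/cat_0001.jpg")
print(head["Metadata"])  # {'label': 'cat', 'split': 'train', ...}
```

One caveat worth noting: S3 does not query this metadata server-side, so searching across many objects needs an index built on top, which is exactly the kind of software layer the excerpt says must run above the storage.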



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard