Daily Tech Digest - October 31, 2021

Hackers Breach iOS 15, Windows 10, Google Chrome During Massive Cyber Security Onslaught

"The first thing to note is the in-group, out-group divide here," says Sam Curry, the chief security officer at Cybereason. Curry told me that there's a sense that China has the "critical mass and doesn't need to collaborate to innovate in hacking," in what he called a kind of U.S. versus them situation. Curry sees the Tianfu Cup, with the months of preparation that lead up to the almost theatrical on-stage reveal, as a show of force. "This is the cyber equivalent of flying planes over Taiwan," he says, adding the positive being that the exploits will be disclosed to the vendors. There are, of course, lots of positives about a hacking competition, such as the Tianfu Cup or Pwn2Own. "The security researchers involved in these schemes can be an addition to existing security teams and provide additional eyes on an organisation's products," George Papamargaritis, the managed security service director at Obrela Security Industries, says, "meaning bugs will be unearthed and disclosed before cybercriminals get a chance to discover them and exploit them maliciously."


SRE vs. SWE: Similarities and Differences

In general, SREs and SWEs are more different than they are similar. The main difference between the roles boils down to the fact that SREs are responsible first and foremost for maintaining reliability, while SWEs focus on designing software. Of course, those are overlapping roles to a certain extent. SWEs want the applications they design to be reliable, and SREs want the same thing. However, an SWE will typically weigh a variety of additional priorities when designing and writing software, such as the cost of deployment, how long it will take to write the application and how easy the application will be to update and maintain. These aren’t usually key considerations for SREs. The toolsets of SREs and SWEs also diverge in many ways. In addition to testing and observability tools, SREs frequently rely on tools that can perform tasks like chaos engineering. They also need incident response automation platforms, which help manage the complex processes required to ensure efficient resolution of incidents.
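
As a sketch of what the chaos-engineering tooling mentioned above actually does, the toy Python model below (all class and function names are invented for illustration) injects a failure into a redundant service and checks that availability survives, which is the basic hypothesis-test loop such tools automate:

```python
import random

class Service:
    """Toy service with redundant replicas; a stand-in for a real deployment."""
    def __init__(self, replicas=3):
        self.replicas = {f"replica-{i}": True for i in range(replicas)}

    def kill_random_replica(self, rng):
        # Pick a live replica and take it down, simulating an injected fault.
        victim = rng.choice([r for r, up in self.replicas.items() if up])
        self.replicas[victim] = False
        return victim

    def heal(self):
        # A real orchestrator (e.g. Kubernetes) would restart failed replicas.
        for r in self.replicas:
            self.replicas[r] = True

    def is_available(self):
        # The service is up as long as at least one replica answers.
        return any(self.replicas.values())

def chaos_experiment(service, rng):
    """Inject a failure, test the availability hypothesis, then recover."""
    service.kill_random_replica(rng)
    survived = service.is_available()
    service.heal()
    return survived

svc = Service(replicas=3)
result = chaos_experiment(svc, random.Random(42))  # True: redundancy held
```

With three replicas, losing one should never take the service down; a failed experiment like this is exactly the kind of reliability regression an SRE wants surfaced before an incident does it for them.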


The Biggest Gap in Kubernetes Storage Architecture?

Actually, commercial solutions aren’t better than open source solutions — not inherently anyway. A commercial enterprise storage solution could still be a poor fit for your specific project, require internal expertise, require significant customization, break easily and come with all the drawbacks of an open source solution. The difference is that where an open source solution is all but guaranteed to come with these headaches, a well-designed commercial enterprise storage solution won’t. It isn’t a matter of commercial versus open source; rather, it’s good architecture versus bad architecture. Open source solutions typically aren’t designed from the ground up for enterprise requirements, which makes it much more difficult to guarantee an architecture that performs well and ultimately saves money. Commercial storage solutions, however, usually are, which raises the odds that they will feature an architecture that meets enterprise requirements. Ultimately, all this is to say that commercial storage solutions are a better fit for most Kubernetes users than open source ones, but that doesn’t mean you can skip the evaluation process.


MLOps vs. DevOps: Why data makes it different

All ML projects are software projects. If you peek under the hood of an ML-powered application, these days you will often find a repository of Python code. If you ask an engineer to show how they operate the application in production, they will likely show containers and operational dashboards — not unlike any other software service. Since software engineers manage to build ordinary software without experiencing as much pain as their counterparts in the ML department, it raises the question: Should we just start treating ML projects as software engineering projects as usual, maybe educating ML practitioners about the existing best practices? Let’s start by considering the job of a non-ML software engineer: writing traditional software deals with well-defined, narrowly-scoped inputs, which the engineer can exhaustively and cleanly model in the code. In effect, the engineer designs and builds the world wherein the software operates. In contrast, a defining feature of ML-powered applications is that they are directly exposed to a large amount of messy, real-world data that is too complex to be understood and modeled by hand.


How to Measure the Success of a Recommendation System?

Predictive models and recommendation systems, without exception, rely heavily on data. They make reliable recommendations based on the facts that they have. It’s only natural that the finest recommender systems come from organizations with large volumes of data, such as Google, Amazon, Netflix, or Spotify. To detect commonalities and suggest items, good recommender systems evaluate item data and client behavioral data. Machine learning thrives on data; the more data the system has, the better the results will be. Data is constantly changing, as are user preferences, and your business is constantly changing. That’s a lot of new information. Will your algorithm be able to keep up with the changes? Of course, real-time recommendations based on the most recent data are possible, but they are also more difficult to maintain. Batch processing, on the other hand, is easier to manage but does not reflect recent data changes. The recommender system should continue to improve as time goes on. Machine learning techniques assist the system in “learning” the patterns, but the system still requires instruction to give appropriate results.
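
One concrete way to measure a recommender is precision@k: the fraction of the top-k suggestions the user actually engaged with. A minimal sketch (the item names are made up for illustration):

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations the user actually found relevant."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Ranked output of a hypothetical recommender vs. what the user engaged with.
recommended = ["song_a", "song_b", "song_c", "song_d", "song_e"]
relevant = {"song_b", "song_e", "song_f"}

p_at_5 = precision_at_k(recommended, relevant, 5)  # 2 hits in the top 5 -> 0.4
```

Tracking a metric like this over time, on both batch and real-time pipelines, is one way to tell whether the system is actually "keeping up with the changes" the excerpt describes.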


An Introduction To Decision Trees and Predictive Analytics

Decision trees represent a connecting series of tests that branch off further and further down until a specific path matches a class or label. They’re kind of like a flowchart of coin flips, if/else statements, or conditions that, when met, lead to an end result. Decision trees are incredibly useful for classification problems in machine learning because they allow data scientists to choose specific parameters to define their classifiers. So whether you’re presented with a price cutoff or target KPI value for your data, you have the ability to sort data at multiple levels and create accurate prediction models. ... Each model has its own sets of pros and cons and there are others to explore besides these four examples. Which one would you pick? In my opinion, the Gini model with a maximum depth of 3 gives us the best balance of good performance and highly accurate results. There are definitely situations where the highest accuracy or the fewest total decisions is preferred. As a data scientist, it’s up to you to choose which is more important for your project! ...
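
As an illustration of the Gini criterion mentioned above, the hand-rolled sketch below (not any particular library's implementation) finds the price cutoff that minimizes weighted Gini impurity, which is the test a decision tree performs at each split; the prices and labels are invented:

```python
def gini(labels):
    """Gini impurity: chance of mislabeling a randomly drawn sample."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(values, labels):
    """Pick the threshold minimizing the weighted Gini impurity of the branches."""
    best = (None, float("inf"))
    n = len(labels)
    for t in sorted(set(values)):
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        score = len(left) / n * gini(left) + len(right) / n * gini(right)
        if score < best[1]:
            best = (t, score)
    return best

# Price-cutoff example: label 1 = "converted", 0 = "did not convert".
prices = [5, 7, 9, 20, 22, 25]
labels = [1, 1, 1, 0, 0, 0]
threshold, impurity = best_split(prices, labels)  # splits cleanly at price 9
```

A real tree repeats this search recursively on each branch, stopping at a chosen maximum depth (such as the depth of 3 preferred above).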


Five Ways Blockchain Technology Is Enhancing Cloud Storage

In blockchain-based cloud storage, information is split into numerous encrypted fragments, which are interlinked through a hashing function. These secure fragments are distributed across the network, and each fragment lives in a decentralized location. Strong security provisions, such as transaction ledgers, encryption with private and public keys, and hashed blocks, guarantee robust protection from hackers. Thanks to the sophisticated 256-bit encryption, even an advanced hacker cannot feasibly decrypt the data. Even in the unlikely event that a hacker does decode something, each decoding attempt exposes only a small fragment of information, not the whole record. These stringent security measures defeat most attempts, making hacking a useless pursuit from a business perspective. Another significant point is that the owners’ data is not stored on any single node, which helps owners retain their privacy, and there are solid provisions for load balancing too.
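
The fragment-and-hash idea can be sketched as a simple hash chain. This toy model deliberately omits the encryption and distribution layers and only shows how hash-linking makes tampering with any one fragment detectable; the data and field names are invented:

```python
import hashlib

GENESIS = b"\x00" * 32  # marker for the start of the chain

def fragment_and_chain(data: bytes, chunk_size: int = 4):
    """Split data into fragments and link them with a SHA-256 hash chain,
    mimicking how blockchain storage ties distributed shards together."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    prev_hash = GENESIS
    fragments = []
    for chunk in chunks:
        digest = hashlib.sha256(prev_hash + chunk).digest()
        fragments.append({"payload": chunk, "hash": digest, "prev": prev_hash})
        prev_hash = digest
    return fragments

def verify_chain(fragments):
    """Recompute every link; any tampered fragment breaks the chain."""
    prev_hash = GENESIS
    for frag in fragments:
        if frag["prev"] != prev_hash:
            return False
        expected = hashlib.sha256(prev_hash + frag["payload"]).digest()
        if frag["hash"] != expected:
            return False
        prev_hash = expected
    return True

frags = fragment_and_chain(b"customer-record-001")
ok_before = verify_chain(frags)
frags[1]["payload"] = b"XXXX"       # tamper with one fragment
ok_after = verify_chain(frags)      # tampering is detected
```

In a real system each fragment would also be encrypted and stored on a different node, so a single compromised node yields neither the full record nor an undetectable modification.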


Data Mesh Vs. Data Fabric: Understanding the Differences

According to Forrester’s Yuhanna, the key difference between the data mesh and the data fabric approach lies in how APIs are accessed. “A data mesh is basically an API-driven [solution] for developers, unlike [data] fabric,” Yuhanna said. “[Data fabric] is the opposite of data mesh, where you’re writing code for the APIs to interface. On the other hand, data fabric is low-code, no-code, which means that the API integration is happening inside of the fabric without actually leveraging it directly, as opposed to data mesh.” For James Serra, who is a data platform architecture lead at EY (Ernst & Young) and previously was a big data and data warehousing solution architect at Microsoft, the difference between the two approaches lies in which users are accessing them. “A data fabric and a data mesh both provide an architecture to access data across multiple technologies and platforms, but a data fabric is technology-centric, while a data mesh focuses on organizational change,” Serra writes in a June blog post. “[A] data mesh is more about people and process than architecture, while a data fabric is an architectural approach that tackles the complexity of data and metadata in a smart way that works well together.”


Data Warehouse Automation and the Hybrid, Multi-Cloud

One trend among enterprises that move large, on-premises data warehouses to cloud infrastructure is to break up these systems into smaller units--for example, by subdividing them according to discrete business subject areas and/or practices. IT experts can use a DWA tool to accelerate this task--for example, by sub-dividing a complex enterprise data model into several subject-specific data marts, then using the DWA tool to instantiate these data marts as separate virtual data warehouse instances, or by using a DWA tool to create new tables that encapsulate different kinds of dimensional models and instantiating these in virtual data warehouse instances. In most cases, the DWA tool is able to use the APIs exposed by the PaaS data warehouse service to create a new virtual data warehouse instance or to make changes to an existing one. The tool populates each instance with data, replicates the necessary data engineering jobs and performs the rest of the operations in the migration checklist described above.


Rethinking IoT/OT Security to Mitigate Cyberthreats

We have seen destructive and rapidly spreading ransomware attacks, like NotPetya, cripple manufacturing and port operations around the globe. However, existing IT security solutions cannot solve those problems due to the lack of standardized network protocols for such devices and the inability to certify device-specific products and deploy them without impacting critical operations. So, what exactly is the solution? What do people need to do to resolve the IoT security problem? Working to solve this problem is why Microsoft has joined industry partners to create the Open Source Security Foundation as well as acquired IoT/OT security leader CyberX. This integration between CyberX’s IoT/OT-aware behavioral analytics platform and Azure unlocks the potential of unified security across converged IT and industrial networks. And, as a complement to the embedded, proactive IoT device security of Microsoft Azure Sphere, CyberX IoT/OT provides monitoring and threat detection for devices that have not yet upgraded to Azure Sphere security.



Quote for the day:

"It's hard for me to answer a question from someone who really doesn't care about the answer." -- Charles Grodin

Daily Tech Digest - October 30, 2021

Ransomware Attacks Are Evolving. Your Security Strategy Should, Too

Modern ransomware attacks typically include various tactics like social engineering, email phishing, malicious email links and exploiting vulnerabilities in unpatched software to infiltrate environments and deploy malware. What that means is that there are no days off from maintaining good cyber-hygiene. But there’s another challenge: As an organization’s defense strategies against common threats and attack methods improve, bad actors will adjust their approach to find new points of vulnerability. Thus, threat detection and response require real-time monitoring of various channels and networks, which can feel like a never-ending game of whack-a-mole. So how can organizations ensure they stay one step ahead, if they don’t know where the next attack will target? The only practical approach is for organizations to implement a layered security strategy that includes a balance between prevention, threat detection and remediation – starting with a zero-trust security strategy. Initiating zero-trust security requires both an operational framework and a set of key technologies designed for modern enterprises to better secure digital assets. 


Stateful Applications in Kubernetes: It Pays to Plan Ahead

Maybe you want to go with a pure cloud solution, like Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS) or Azure Kubernetes Service (AKS). Or perhaps you want to use your on-premises data center for solutions like RedHat’s OpenShift or Rancher. You’ll need to evaluate all the different components required to get your cluster up and running. For instance, you’ll likely have a preferred container network interface (CNI) plugin that meets your project’s requirements and drives your cluster’s networking. Once your clusters are operational and you’ve completed the development phase, you’ll begin testing your application. But now, your platform team is struggling to maintain your stateful application’s availability and reliability. As part of your stateful application, you’ve been using a database like Cassandra, MongoDB or MySQL. Every time a container is restarted, you begin to see errors in your database. You can prevent these errors with some manual intervention, but then you’re missing out on the native automation capabilities of Kubernetes.
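
A toy model of why those restart errors appear: state kept in a container's memory dies with the container, while state written to a persistent volume survives a restart. The classes below are illustrative stand-ins, not the Kubernetes API, where the equivalent mechanism is a StatefulSet with a PersistentVolumeClaim:

```python
class Container:
    """Toy container: in-memory state is ephemeral; a volume persists."""
    def __init__(self, volume=None):
        self.memory = {}      # ephemeral state, lost on every restart
        self.volume = volume  # persistent volume that outlives the container

    def write(self, key, value):
        self.memory[key] = value
        if self.volume is not None:
            self.volume[key] = value   # durable copy

    def restart(self):
        self.memory = {}               # everything in-memory is gone
        if self.volume is not None:
            self.memory = dict(self.volume)  # recover state from the volume

pv = {}                                # stands in for a PersistentVolume
db = Container(volume=pv)
db.write("order-1", "paid")
db.restart()
recovered = db.memory.get("order-1")   # "paid": state survived the restart

ephemeral = Container(volume=None)
ephemeral.write("order-1", "paid")
ephemeral.restart()
lost = ephemeral.memory.get("order-1")  # None: state vanished with the pod
```

Databases like Cassandra, MongoDB or MySQL behave like the second container unless their storage is explicitly backed by persistent volumes, which is why the manual interventions described above become necessary.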


Understanding Kubernetes Compliance and Security Frameworks

Compliance has become crucial for ensuring business continuity, preventing reputational damage and establishing the risk level for each application. Compliance frameworks aim to address security and privacy concerns through easy monitoring of controls, team-level accountability and vulnerability assessment—all of which present unique challenges in a K8s environment. To fully secure Kubernetes, a multi-pronged approach is needed: Clean code, full observability, preventing the exchange of information with untrusted services and digital signatures. One must also consider network, supply chain and CI/CD pipeline security, resource protection, architecture best practices, secrets management and protection, vulnerability scanning and container runtime protection. A compliance framework can help you systematically manage all this complexity. ... The Threat Matrix for Kubernetes, developed from the widely recognized MITRE ATT&CK (Adversarial Tactics, Techniques & Common Knowledge) Matrix, takes a different approach based on today’s leading cyberthreats and hacking techniques.


Authentication in Serverless Apps—What Are the Options?

In serverless applications, there are many components interacting—not only end users and applications but also cloud vendors and applications. This is why common authentication methods, such as single-factor, two-factor and multifactor authentication, offer only a bare minimum foundation. Serverless authentication requires a zero-trust mentality—no connection should be trusted; even communication between internal components of an application should be authenticated and validated. To properly secure serverless authentication, you also need to use authentication and authorization protocols, configure secure intraservice permissions and monitor and control incoming and outgoing access. ... A network is made accessible through a SaaS offering to external users. Access will be restricted, and every user will require the official credentials to achieve that access. However, this brings up the same problems raised above—the secrets must be stored somewhere. You cannot manage how your users access and store the credentials that you provide them with; therefore, you should assume that their credentials are not being kept securely and that they may be compromised at any point.
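
As a minimal sketch of what "authenticate every internal call" can look like, the snippet below validates short-lived HMAC-signed tokens on each request. The service name and shared key are hypothetical, and a real deployment would typically use a standard such as JWT/OAuth with keys pulled from a secrets manager rather than a hard-coded constant:

```python
import hashlib
import hmac
import json
import time

SECRET = b"shared-signing-key"  # in practice, fetched from a secrets manager

def issue_token(service_name, ttl_seconds=300, now=None):
    """Sign a short-lived token one internal component presents to another."""
    now = time.time() if now is None else now
    claims = {"svc": service_name, "exp": now + ttl_seconds}
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body, sig

def validate_token(body, sig, now=None):
    """Zero trust: every call re-checks the signature and the expiry."""
    now = time.time() if now is None else now
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                    # tampered or forged token
    claims = json.loads(body)
    return claims["exp"] > now          # reject expired tokens

body, sig = issue_token("order-service", ttl_seconds=300, now=1000.0)
valid = validate_token(body, sig, now=1100.0)                          # True
expired = validate_token(body, sig, now=2000.0)                        # False
tampered = validate_token(body.replace(b"order", b"admin"), sig,
                          now=1100.0)                                  # False
```

The short TTL limits the blast radius of a leaked token, which is exactly the mitigation called for once you assume credentials "may be compromised at any point."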


The economics behind adopting blockchain

If we take the insurance sector as a use case, we can see how blockchain mitigates various issues around information asymmetries. One fundamental concern in the insurance sector is the principal-agent problem, which stems from conflicting incentives amidst information asymmetry between the principal (the insurance company) and the agent (of the company). Some adverse outcomes of this include unprofessional conduct, agents forging documents to meet assigned targets as well as a misrepresentation of the compliances, often leading to mis-selling of insurance products. These problems occur primarily due to the absence of an integrated mechanism to track and prevent fraudulent conduct of the agent. In such a scenario, blockchain has the ability to bridge the gaps and enhance the customer experience by virtue of providing a distributive, immutable and transparent rating system that allows agents to be rated according to their performance by companies as well as clients.


Techstinction - How Technology Use is Having a Severe Impact on our Climate

Like most large organisations, there is a general consciousness of the impact the Financial Services Industry is having on the environment. All three of these banks are taking serious measures to reduce their CO2 emissions and to change the behaviours of their staff. The NatWest Group (which owns RBS), for example, recently published a working from home guide for its employees containing tips on how to save energy. Whilst this and all sustainability measures should be applauded, it’s important to acknowledge that "sustainability in our workplace" is very different and less important than "sustainability in our work", simply because there is more to be gained by optimising what we are doing as opposed to where we do it, both financially and for the environment. Sustainability in our work involves being lean in everything we do, including the hardware infrastructure being used, being completely digital in the services provided as well as how we produce software to deliver these services. All the major cloud providers invest heavily in providing energy efficient infrastructure as well as using renewable energy sources.


How machine learning speeds up Power BI reports

Creating aggregations you don't end up using is a waste of time and money. "Creating thousands, tens of thousands, hundreds of thousands of aggregations will take hours to process, use huge amounts of CPU time that you're paying for as part of your licence and be very uneconomic to maintain," Netz warned. To help with that, Microsoft turned to some rather vintage database technology dating back to when SQL Server Analysis Services relied on multidimensional cubes, before the switch to in-memory columnar stores. Netz originally joined Microsoft when it acquired his company for its clever techniques around creating collections of data aggregations. "The whole multidimensional world was based on aggregates of data," he said. "We had this very smart way to accelerate queries by creating a collection of aggregates. If you know what the user queries are, [you can] find the best collection of aggregates that will be efficient, so that you don't need to create surplus aggregates that nobody's going to use or that are not needed because some other aggregates can answer [the query]."
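
The point that one aggregate can answer another query without touching the raw rows can be shown in a few lines; the sales data and column layout below are invented for illustration:

```python
from collections import defaultdict

# Raw fact rows: (year, quarter, product, amount).
sales = [
    ("2021", "Q1", "Widgets", 100),
    ("2021", "Q1", "Gadgets", 50),
    ("2021", "Q2", "Widgets", 120),
    ("2021", "Q2", "Gadgets", 80),
]

def aggregate(rows, keys):
    """Precompute a sum aggregate over the given key columns (by position)."""
    agg = defaultdict(int)
    for row in rows:
        agg[tuple(row[k] for k in keys)] += row[-1]
    return dict(agg)

# One detailed aggregate at (year, quarter) grain ...
by_quarter = aggregate(sales, keys=[0, 1])

# ... can answer the coarser per-year query by rolling itself up, without
# rescanning the raw rows. This is why a small, well-chosen collection of
# aggregates can serve many queries.
by_year = defaultdict(int)
for (year, _quarter), total in by_quarter.items():
    by_year[year] += total
```

The hard problem Netz describes is choosing which grains to materialize so that most real queries can be answered this way, without paying to maintain aggregates nobody uses.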


How GitOps Benefits from Security-as-Code

The emergence of security-as-code signifies how the days of security teams holding deployments up are waning. “Now we have security and app dev who are now in this kind of weird struggle — or I think historically had been — but bringing those two teams together and allowing flexibility, but not getting in the way of development is really to me where the GitOps and DevSecOps emerge. That’s kind of the big key for me,” Blake said. ... Developers today are deploying applications in an often highly distributed microservices environment. Security-as-code serves to both automate security for CI/CD with GitOps while also ensuring security processes are taking interconnectivity into account. “It’s sort of a realization that everything is so interconnected — and you can have security problems that can cause operational problems. If you think about code quality, one of your metrics for ‘this is good code’ doesn’t cause a security vulnerability,” Omier said. “So, I think a lot of these terms really come from acknowledging that you can’t look at individual pieces, when you’re thinking about how we are doing? ..."


The role of Artificial Intelligence in manufacturing

There are a few key application areas that make particularly suitable launching pads for manufacturers embarking on their cognitive computing journey: intelligent maintenance, intelligent demand planning and forecasting, and product quality control. The deployment of AI is a complex process, as with many facets of digitisation, but it has not stopped companies from moving forward. The ability to grow and sustain the AI initiative over time, in a manner that generates increasing value for the enterprise, is likely to be crucial to achieving early success milestones on an AI adoption journey. Manufacturing companies are adopting AI and ML with such speed because by using these cognitive computing technologies, organisations can optimise their analytics capabilities, make better forecasts and decrease inventory costs. Improved analytics capabilities enable companies to switch to predictive maintenance, reducing maintenance costs and reducing downtime. The use of AI allows manufacturers to predict when or if functional equipment will fail so that maintenance and repairs can be scheduled in advance.


What the metaverse means for brand experiences

The metaverse is best described as a 3D World Wide Web or a digital facsimile of the physical world. In this realm, users can move about, converse with other users, make purchases, hold meetings, and engage in all manner of other activities. In the metaverse, all seats at live performances are front and center, sporting events are right behind home plate or center court, and of course, all avatars remain young and beautiful — if that’s what you desire — forever. As you might imagine, this is a marketer’s dream. Anheuser-Busch InBev global head of technology and innovation Lindsey McInerney explained to Built In recently that marketing is all about getting to where the people are, and a fully immersive environment is ripe with all manner of possibilities, from targeted marketing and advertising opportunities to fully virtualized brand experiences. Already, companies like AB InBev are experimenting with metaverse-type marketing opportunities, such as virtual horse racing featuring branded NFTs.



Quote for the day:

"Making those around you feel invisible is the opposite of leadership." -- Margaret Heffernan

Daily Tech Digest - October 29, 2021

How to become an entrepreneurial engineer and create your own career path

"To be a successful entrepreneurial engineer, you must wear two hats: one with a deep technical focus and the other focused on the goals of the business," said Loren Goodman, CTO and co-founder of InRule Technology. "This allows you to make decisions in real-time leveraging your understanding of diminishing returns on both fronts. The why, the what and the how are traditionally separated, and small changes to any part can have exaggerated effects on the others. You bring this thinking together—for example, knowing that a feature can be done in a fraction of the time if a small part was removed from scope and also knowing that that part is not core to the business need." Goodman stressed that entrepreneurial engineers must be curious about the bigger picture and be unafraid to take on challenging problems. They must also be success-focused, with a relentless passion for achieving the best solution to difficult problems, no matter how unrealistic things might seem. Finally, he said, a successful entrepreneurial engineer must be scrappy: "You are going to have to be comfortable working without all the necessary resources for a long time while still staying focused on your objectives."


Forensic Monitoring of Blockchains Is Key for Broader Industry Adoption

In the event that an adversary corrupts more than 1/3 of the master nodes in the BFT committee of any given epoch, it is then technically possible for said adversary to violate safety and jeopardize the consensus by creating forks, resulting in two or more finalized blockchains. However, certain messages would need to be signed and sent by these nodes to make this happen, which can then be detected by the system immediately after a fork with a length of only one appears. The signed messages can then be used as irrefutable proof of the misbehavior. Those messages are embedded into the blockchain and can be obtained by querying master nodes for forked blockchains. This is what enables the forensic monitoring feature, which can identify as many Byzantine master nodes as possible, all while obtaining the proof from querying as few witnesses as possible. For example, two separate honest nodes, each with access to one of the two conflicting blockchains, are sufficient for the proof.
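
A sketch of that detection logic: if the same node has signed votes for two different blocks at the same height, the pair of signed votes is itself the irrefutable proof of equivocation. The vote format below is invented for illustration, with a tuple standing in for a cryptographically signed message:

```python
def find_byzantine(votes):
    """Each vote is (node, height, block_hash), 'signed' by the node.
    Two votes from the same node for different blocks at the same height
    are irrefutable proof of equivocation."""
    seen = {}       # (node, height) -> block_hash first seen
    proofs = []
    for node, height, block_hash in votes:
        key = (node, height)
        if key in seen and seen[key] != block_hash:
            # The conflicting pair of signed votes is the proof.
            proofs.append((node, height, seen[key], block_hash))
        else:
            seen[key] = block_hash
    return proofs

# Two honest witnesses each hold one branch of a fork at height 7.
branch_a = [("node-1", 7, "hashA"), ("node-2", 7, "hashA")]
branch_b = [("node-1", 7, "hashB"), ("node-3", 7, "hashB")]

proofs = find_byzantine(branch_a + branch_b)  # node-1 voted for both branches
```

Only node-1 appears on both branches, so only node-1 is implicated; honest nodes that each followed a single branch produce no conflicting pair, which is why two witnesses, one per branch, suffice.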


Infrastructure-as-Code: 6 Best Practices for Securing Applications

Research from security platform provider Snyk reveals that many companies are only starting out on their IaC journey, with 63% just beginning to explore the technology and only 7% stating they’ve implemented IaC to meet current industry standards. And with this practice comes changes in responsibility: IaC further extends developers’ responsibility to include securing their code and infrastructure. Misconfigurations can easily introduce security risks if best practices are not followed. In fact, according to Gartner, “70% of attacks against containers will be from known vulnerabilities and misconfigurations that could have been remediated.” Often, security trails behind the usage of IaC, resulting in configuration issues that are only detected after applications are deployed. That doesn’t have to be the case. In fact, the best way to ensure every configuration is secure, while still benefiting from the speed and repeatability of IaC, is to build security testing for IaC into developers’ workflows, the same as other forms of code.
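
As a sketch of building such a security test into a developer workflow, the function below scans a parsed, Terraform-style configuration (reduced to plain dicts; the resource names and rule shape are hypothetical) for security groups open to the whole internet, the kind of misconfiguration best caught before deployment:

```python
def find_open_ingress(resources):
    """Flag ingress rules open to 0.0.0.0/0 on anything other than HTTPS,
    a classic IaC misconfiguration a pre-deploy check should catch."""
    findings = []
    for name, resource in resources.items():
        for rule in resource.get("ingress", []):
            if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") != 443:
                findings.append((name, rule["port"]))
    return findings

# A hypothetical parsed configuration: a public web tier (acceptable)
# and a database exposed to the internet (a finding).
config = {
    "web_sg": {"ingress": [{"port": 443, "cidr": "0.0.0.0/0"}]},
    "db_sg":  {"ingress": [{"port": 5432, "cidr": "0.0.0.0/0"}]},
}

findings = find_open_ingress(config)  # only the database rule is flagged
```

Run as part of CI, a check like this fails the pipeline before the misconfigured infrastructure ever exists, which is the "security testing for IaC in developers' workflows" the excerpt advocates; production scanners such as policy-as-code tools apply far richer rule sets than this one-rule sketch.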


The shift from DevOps and security to DevSecOps: 5 key roadblocks

There is DevOps plus security, and then there’s DevSecOps. What’s the difference? In the first case, security is a third wheel. In the second, it’s the third leg of the stool—an integral part of the system that’s almost unnoticeable unless or until it disappears. Indeed, to be effective, security must be everywhere—throughout the pipeline used to build and deploy as well as the runtime environment. In the DevSecOps model, security is a shared responsibility for development, security and operations teams and throughout the entire IT lifecycle. However, many organizations are challenged to integrate, rather than just tack on, security measures. This is a huge issue when a company’s own security is at stake, but an increasing number of attacks on the software supply chain is leaving tens, hundreds, even thousands of organizations vulnerable. There are many granular recommendations for achieving DevSecOps. Here are the bigger-picture issues that your organization must address to move beyond security as an afterthought.


Agile Architecture - What Is It?

From this definition, two very important terms emerge: Emergent Design and Intentional Architecture. Emergent Design is the process of analyzing and extending the architecture just enough to implement and validate the next increment in the development cycle. Intentional Architecture is about seeing the big picture. Large corporations need to simultaneously respond to new business challenges with large-scale architectural initiatives. By "large scale" we mean that meeting the business objective involves multiple teams, products, and systems. In this case, Emergent Design is not enough, as it is confined to a single team. Without Intentional Architecture, we can run into several problems, such as difficulty integrating, validating and maintaining the fulfillment of non-functional system requirements, low reuse, redundancy of solutions, etc. Intentional Architecture gives the teams a common objective and destination, allowing the alignment of efforts and the parallelization of the work of independent teams. In other words, it is the guiding track, the glue between the teams' work.


NRA Reportedly Hit By Russia-Linked Ransomware Attack

The NRA did not immediately respond to Information Security Media Group's request for comment. But Andrew Arulanandam, managing director of public affairs for the NRA, took to Twitter to say: "NRA does not discuss matters relating to its physical or electronic security. However, the NRA takes extraordinary measures to protect information regarding its members, donors, and operations - and is vigilant in doing so." Allan Liska, a ransomware analyst at the cybersecurity firm Recorded Future, told NBC that Grief is "the same group" as Evil Corp. The news outlet verified that the information in the leaked files includes grant proposal forms, names of recent grant recipients, an email sent to a grant winner, a federal W-9 form and minutes from the organization's virtual meeting in September. Sam Curry, CSO of Cybereason, tells ISMG, "It's unlikely this is a strategic attack, but time will tell. The way it would be strategic is to further divide the left from the right in the U.S. … The most likely scenario is that it's motivated by greed, and it has the potential to inadvertently explode politically. The next move is in the NRA's hands."


Is the Indian SaaS Story Overhyped?

Experts watching the SaaS space opine that after Freshworks' recent listing, global perception towards Indian SaaS companies has changed. Last month, Freshworks became the first Indian software maker to list on Nasdaq. “SaaS companies in India are gaining acceptance and attention from investors. Initially, investors were slow due to the nature of revenue which is a money sucker but as the customer base grew with a lower drop, the revenue started to look good. Things have changed a lot after Postman and Freshworks. Indian SaaS companies are now seriously looked at as potential unicorns,” said Anil Joshi, managing partner, Unicorn India Ventures. The SaaS ecosystem is relatively nascent in India and is led by players such as Freshworks, Capillary, Eka, etc., said Anurag Ramdasan, partner, 3one4 Capital. “While there are double-digit unicorns in Indian SaaS today, it’s still a very early ecosystem and we are seeing a lot of innovative SaaS in the seed to series A stage in India,” he said. Many companies that have become soonicorns and unicorns have great consumer stories and investors today look at India as a huge consumer story.


How do I select an SD-WAN solution for my business?

Network security is also gaining greater importance as cyber-security threats multiply, leading to cloud-based security techniques converging with SD-WAN in the SASE framework. But the transition to these technologies can be challenging, with significant support required from the SD-WAN partner. Therefore, enterprises need to evaluate SD-WAN providers based on three principal criteria. First, does the provider’s network reach align with the enterprise’s geographic locations and does the provider offer a Tier 1 IP backbone to realize the full performance advantages of SD-WAN? Second, does the provider offer a managed SD-WAN, including local internet or MPLS access, with end-to-end delivery, technical implementation support, and service assurance to help manage complexity? Third, does the provider have a clear SASE roadmap integral to its SD-WAN vision? This includes services like zero-trust network access (ZTNA) and cloud access security broker (CASB) for remote workers and cloud firewall and secure web gateway (SWG) to support the branch level.


The Rise of Event-Driven Architecture

In the REST framework, an API isn’t aware of the state of objects. The client queries the API to find out the state, and the role of the API is to respond to the client with the information. However, with an event-driven API, a client can subscribe to the API, effectively instructing it to monitor the state of objects and report back with real-time updates. Therefore, behavior shifts from stateless handling of repeatable, independent requests to stateful awareness of the virtual objects modeled on real-world operations. Event-driven APIs are a great way to meet the demands of modern end-users who expect customized and instantaneous access to information. Applying these APIs is easy to do in one-off, bespoke environments. However, things get more complicated when you need to offer this level of service at scale, and not every enterprise is ready to handle that level of complexity. To avoid amassing significant technical debt, organizations and developers should offload this complexity to a third party with the capabilities to synchronize digital experiences in real time and at scale.


We Are Testing Software Incorrectly and It's Costly

The tests you write are tightly coupled to the underlying design of your code, and design is constantly evolving. You now not only have to refactor your production code; you have to change your tests, too! Your tests should support refactoring and give you confidence, but instead they only make you work harder while giving no confidence that things still work correctly. For brevity, I won't even get into "mock hell" (it is well worth searching for). But instead of abandoning refactoring or unit tests, all you need to do is free yourself from the mistaken definition of "unit testing." Focus on testing behaviors! Instead of writing unit tests for every public method of every class, write unit tests for every component (e.g., user, product, order), covering every behavior of each component and focusing on the public interface of the unit. To achieve that, you will need to learn how to structure your code properly. Don't package your code by technical concerns (controllers, services, repositories, etc.); senior devs structure their code by domain.



Quote for the day:

"The world's greatest achievers have been those who have always stayed focussed on their goals and have been consistent in their efforts." -- Roopleen

Daily Tech Digest - October 28, 2021

Using Complex Networks to improve Machine Learning methods

Let’s start by defining what a complex network is: a collection of entities called nodes connected between themselves by edges that represent some kind of relationship. If you’re thinking: this is a graph! Well, you are correct, most complex networks can be considered graphs. However, complex networks usually scale up to thousands or millions of nodes and edges, which can make them pretty hard to analyze with standard graph algorithms. There is a lot of synergy between complex networks and the data science field because we have tools to try and understand how the network is built and what behavior we can expect from the entire system. Because of that, if you can model your data as a complex network, you have a new set of tools to apply to it. In fact, there are many machine learning algorithms that can be applied to complex networks and also algorithms that can leverage network information for prediction. Even though this intersection is relatively new, we can already play around with it a bit.
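As a toy illustration of modeling data as a network and extracting a feature from it (the names and edges are invented; real complex networks have millions of nodes, but the idea is the same), degree centrality can be computed straight from an edge list and fed to a downstream model:

```javascript
// A tiny network as an edge list: users (nodes) linked by interactions (edges).
const edges = [
  ["alice", "bob"], ["alice", "carol"], ["alice", "dave"],
  ["bob", "carol"], ["dave", "erin"],
];

// Degree centrality: how many edges touch each node — a basic
// network feature a machine learning model can consume.
const degree = {};
for (const [a, b] of edges) {
  degree[a] = (degree[a] ?? 0) + 1;
  degree[b] = (degree[b] ?? 0) + 1;
}

// The hub of this toy network is the highest-degree node.
const hub = Object.entries(degree).sort((x, y) => y[1] - x[1])[0][0];
console.log(degree, hub); // alice has degree 3, so she is the hub
```

At real scale you would reach for dedicated network libraries rather than plain objects, but the "model as network, then extract features" workflow looks just like this.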


How to Find a Mentor and Get Started in Open Source

What separates open source from its proprietary counterpart is the open source community, made up of a mix of volunteers, super-fans and über-users of a product or suite of products. So while it’s reasonably overwhelming to think where to start, there’s the unique benefit of built-in communities to support you. It’s good to start with an idea of what you want to get out of your contribution — a job, a mentor, experience in a methodology, service, interest or coding language. Use the CNCF project landscape to search by your interest — monitoring, securing, or deploying, for example — or by organization or skillset. Next, consider whether you want to be part of one of the biggest, horizontal communities or whether you’d feel more comfortable in a smaller niche. And then it’s about deciding what you want to put in to achieve that goal. For Mohan, contributing to open source projects gives her experience in a wider breadth of technologies outside of her job, including in Kubernetes and chaos engineering.


Securing a New World: Navigating Security in the Hybrid Work Era

Security doesn’t get any easier with some workers returning to the office, others staying home and quite a few doing a bit of both. That’s because the office, which was once the company’s security standard, is often full of devices that have been sitting idle since early last year. Security patches are issued all the time, and it is important to install them as soon as they’re published. But a computer that has been turned off for a year, unable to download patches, is a vulnerable device. And there may be dozens or even hundreds of patches waiting in the queue that are needed to bring a device up to par. There are, not surprisingly, a host of recommendations that experts have offered to help security teams in their work. Educating employees on the threats that people and companies face is one of their top suggestions. A survey from Proofpoint’s State of the Phish report emphasizes the need for a people-centric approach to cybersecurity protections and awareness training that accounts for changing conditions, like those constantly experienced throughout the pandemic.


Now’s the time for more industries to adopt a culture of operational resilience

When you think about resiliency and doing work in operational models, it’s a verb-based system, right? How are you going to do it? How are you going to serve? How are you going to manage? How are you going to change, modify, and adjust to immediate recovery? All of those verbs are what make resiliency happen. What differentiates one business sector from another aren’t those verbs. Those are immutable. It’s the nouns that change from sector to sector. So, focusing on all the same verbs, that same perspective we looked at within financial services, is just as applicable when you think about telecommunications or power. ... We’re seeing resiliency in the top five concerns for board-level folks. They need a solution that can scale up and down. You cannot take a science fair project and impact an industry nor provide value in the quick way these firms are looking for. The idea is to be able to try it out and experiment. And when they figure out exactly how to calibrate the solution for their culture and level of complexity, then they can rinse, repeat, and replicate to scale it out.


AWS's new quantum computing center aims to build a large-scale superconducting quantum computer

The launch of the AWS Center for Quantum Computing sees Amazon reiterating its ambition to take a leading role in the field of quantum computing, which is expected to one day unleash unprecedented amounts of compute power. Experts predict that quantum computers, when they are built to a large enough scale, will have the potential to solve problems that are impossible to run on classical computers, unlocking huge scientific and business opportunities in fields like materials science, transportation or manufacturing. There are several approaches to building quantum hardware, all relying on different methods to control and manipulate the building blocks of quantum computers, called qubits. AWS has announced that the company has chosen to focus its efforts on superconducting qubits -- the same method used by rival quantum teams at IBM and Google, among others. AWS reckons that superconducting processors have an edge on alternative approaches: "Superconducting qubits have several advantages, one of them being that they can leverage microfabrication techniques derived from the semiconductor industry," Nadia Carlsten tells ZDNet.


The causes of technical debt, and how to mitigate it

There is no single silver bullet that will fix technical debt. Instead, it needs to be addressed in a multi-faceted way. First, there needs to be a better cultural understanding across the entire business regarding precisely what it is. Importantly, stakeholders, including product owners, must also understand how their actions and decisions may be contributing. Going back to the credit card analogy, it helps if stakeholders can bear in mind that they could be dealing with 22% or higher annual interest. In such a case, the temptation to ‘spend’ beyond the team’s limits and live with minimum payments diminishes. To pay off existing architectural and other types of technical debt, teams should compare their current minimum payments and the impact of those on overall velocity and team morale with the staggering expense of re-architecting part or all of a solution. Moving from a monolith to microservices is a good example. As mentioned, however, there is no one-size-fits-all solution. Long-term maintenance and ‘expenses’ need to be considered as well.
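The credit-card arithmetic is worth making concrete. As an illustration only (the 10-hour "principal" is invented; the 22% rate is the article's analogy), here is how a small shortcut compounds if left unpaid:

```javascript
// Treat a shortcut that creates 10 extra hours of rework per year
// as "principal", compounding at the analogy's 22% annual rate.
const principal = 10; // hours of rework in year one
const rate = 0.22;

let owed = principal;
const growth = [];
for (let year = 1; year <= 5; year++) {
  owed *= 1 + rate;                      // interest accrues on the full balance
  growth.push(Number(owed.toFixed(1)));  // snapshot, rounded for display
}
console.log(growth); // [ 12.2, 14.9, 18.2, 22.2, 27 ]
```

The 10-hour shortcut nearly triples to 27 hours of drag within five years, which is the point of the analogy: paying only the "minimum payment" gets more expensive every year.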


Why aren’t optical disks the top choice for archive storage?

Optical media is also designed with full backwards compatibility, meaning future BD-R and ODA drives will be able to read disks written in today’s drives. For example, you can read a CD-R disk written in 1991 in a current BD-R drive. In contrast, LTO-8 tape drives cannot read LTO-5 or LTO-6 tapes, although they can read LTO-7 tapes. BD-R drives advertise a lifetime of 50 years and Sony advertises 100 years, both of which are longer than tape (30 years) and magnetic hard drives (five years). If you wanted a 50-year archive on LTO, you would be forced to migrate data at least once to avoid bit rot but not, as some optical marketing material suggests, every 10 years. Many people do this anyway to allow them to retire older tape drives and achieve greater storage density. There is also no current requirement to re-tension the tapes every so often. There is some debate about the bit error rate of optical versus tape, but that is a complex issue beyond the scope of this article.


How to develop a high-impact team

Innovation is increasingly becoming a team sport, requiring diverse perspectives and collective intelligence. These innovation-focused teams tend to be ephemeral. They form, collaborate, and disband quickly. Team members need to be able to step up and step back with equal ease. To participate in this fast, fluid model of leadership, less assertive employees (and those uninterested in careers in management) will likely need help stepping up. To get these reluctant leaders to step up and then step back, provide a path of retreat. Show them that being a designated leader can be a temporary assignment, existing for the duration of a project or even for just a single meeting. Some team members will need encouragement and support to become “step-up” leaders, but others will do so with ease. It can take work to then get them to step back and support others. You can help these people develop a more fluid leadership style by modeling healthy followership practices. Let them see you collaborating with a peer organization or contributing to a project led by someone below you in the management hierarchy.


Why automation progress stalls: 3 hidden culture challenges

“A general challenge with putting automation in place is that IT culture often focuses on heroic problem-solving rather than more mundane processes that prevent problems from happening in the first place,” says Red Hat technology evangelist Gordon Haff. “Automation has long been part of the picture – think system admins writing Bash scripts – but it’s also been reactive rather than proactive.” If your organization has treated automation mostly as a reactive problem-solver in the past, people may be less inclined to instinctively grasp its greater value. That’s where leaders have work to do in terms of communicating your big-picture plan and the role that automation – and everyone on the team – plays in it. This is also a mindset that must shift over time with experience and results: Automation should be as much (or more) about improvement and optimization as it is about dousing production fires or cutting costs. Ideally, automation should be boring, in the best possible sense of the word. “Modern automation practices, such as we often see in SRE roles, make automating systems and workflows part of the daily routine,” Haff says.


Regulation fatigue: A challenge to shift processes left

President Biden’s recent executive order asks government vendors to attest “to the extent practicable, to the integrity and provenance of open source software used within any portion of a product.” The president’s recent order, and the potential actions of legislators to follow, could lead to burdensome regulations that interfere with shift left practices, and ultimately slow down the pace of software development. The challenge with the directive is that nearly 60 percent of software developers have little to no secure coding training. Developers are traditionally focused on pushing out innovative, stable products, not triaging security alerts. They want to use open-source code without thinking about its possible security risks. Developers rely on open-source components because these are ready-made pieces of code that allow them to keep up with competitive release time frames. They often leave it to their security teams to identify mistakes at the end of the development process. Developers’ reliance on open-source components often presents a challenge to the cautious attitude of security teams. 



Quote for the day:

"Leaders, be mindful that there is a tendency to become arrogant. Such hubris blinds even the best intentions. Lead with humility." -- S Max Brown

Daily Tech Digest - October 27, 2021

Node.js makes fullstack programming easy with server-side JavaScript

Web application developers are inundated with options when it comes to choosing the languages, frameworks, libraries, and environments they will use to build their applications. Depending on which statistics you believe, the total number of available languages is somewhere between 700 and 9000. The most popular—for the past nine years according to the 2021 Stack Overflow Developer Survey—is JavaScript. Most people think of JavaScript as a front-end language, but Node.js brought it to the server. Originally launched in 2009, Node.js has quickly become one of the most widely used options among application developers. More than half of developers are now using Node.js—it is the most popular non-language, non-database development tool. It allows you to run JavaScript on the server side, which lets software engineers develop on the full web stack. Node.js’s popularity has snowballed for good reason. Node.js is a fast, low-cost, effective alternative to other back-end solutions. And with its two-way client-server communication channel, it is hard to beat for cross-platform development.


Your Data Plane Is Not a Commodity

If you are going to invest a ton of time, effort and engineering hours in a service mesh and a Kubernetes rollout, why would you want to buy the equivalent of cheap tires – in this case, a newer and minimally tested data plane written in a language that may not even have been designed to handle wire-speed application traffic? Because, truly, your data plane is where the rubber meets the road for your microservices. The data plane is what will directly influence customer perceptions of performance. The data plane is where problems will be visible. The data plane will feel scaling requirements first and most acutely. A slow-to-respond data plane will slow the entire Kubernetes engine down and affect system performance. Like tires, too, the data plane is relatively easy to swap out. You do not necessarily need major surgery to pick the one you think is best and mount it on your favorite service mesh and Kubernetes platform, but at what cost?


Why traditional IP networking is wrong for the cloud

Of course, the IP networking layer does provide a way to connect your data center to the cloud. However, one of the main challenges of legacy networking is that it provides limited visibility into applications in the cloud—the lifeblood of enterprises today and arguably the primary driver behind cloud adoption. At Layer 7, or the so-called application layer, enterprises have a holistic view of what takes place at that level (applications and collections of services) as well as in the stack below, such as at TCP and UDP ports and IP endpoints. By operating with the traditional stack (i.e., the IP layer) alone, enterprise teams have a substantially harder time viewing what is above them in the stack. They have a view of the network alone, and blind spots for everything else. Why does this matter? For one, it can significantly increase remediation time when performance problems occur. Indeed, enterprises need to understand how their cloud infrastructure works in relation to the application and A/B test configurations to align with application performance.


Defining the Developer Experience

Microservices architecture and cloud-native applications go hand in hand. Most organizations leverage a microservice architecture to decouple and achieve greater scale, as without it you have too many people changing the same code, causing velocity to slow as friction increases. Where, in a monolithic architecture, teams would be bumping into each other to merge, release, and deploy their changes to the monolith, in a microservices architecture, each team can clearly define the interfaces between their components, limiting the size and complexity of the codebase they are managing to that of a smaller, more agile team. Each team can move more quickly since they can focus on the components they own. Their level of friction and velocity can be that of just the group working on that component, not that of the larger development organization. ... But this creates its own problems as well, a key one being the complexity of needing to ensure the cohesive whole also gets tested and functions together as a complete software product.


How we built a forever-free serverless SQL database

How can we afford to give this away? Well, certainly we’re hoping that some of you will build successful apps that “go big” and you’ll become paying customers. But beyond that, we’ve created an innovative Serverless architecture that allows us to securely host thousands of virtualized CockroachDB database clusters on a single underlying physical CockroachDB database cluster. This means that a tiny database with a few kilobytes of storage and a handful of requests costs us almost nothing to run, because it’s running on just a small slice of the physical hardware. ... Given that the SQL layer is so difficult to share, we decided to isolate that in per-tenant processes, along with the transactional and distribution components from the KV layer. Meanwhile, the KV replication and storage components continue to run on storage nodes that are shared across all tenants. By making this separation, we get “the best of both worlds” – the security and isolation of per-tenant SQL processes and the efficiency of shared storage nodes.
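The per-tenant/shared split described above can be caricatured in a few lines. This is a toy sketch of the general idea — isolated per-tenant SQL layers over shared storage — not CockroachDB's actual design or code; every name here is invented:

```javascript
// Shared storage "nodes": one key-value store serving every tenant.
// Isolation comes from prefixing every key with the tenant's id,
// so no tenant's layer can ever touch another tenant's rows.
const sharedKV = new Map();

// Per-tenant "SQL layer": a separate handler per tenant, confined
// to its own key prefix.
function sqlLayerFor(tenantId) {
  const prefix = `${tenantId}/`;
  return {
    put: (key, value) => sharedKV.set(prefix + key, value),
    get: (key) => sharedKV.get(prefix + key),
  };
}

const tenantA = sqlLayerFor("tenant-a");
const tenantB = sqlLayerFor("tenant-b");

tenantA.put("users/1", "alice");
tenantB.put("users/1", "bob"); // same logical key, different tenant

console.log(tenantA.get("users/1"), tenantB.get("users/1")); // alice bob
```

A tiny tenant costs almost nothing here because it is just a prefix in a shared store plus a lightweight handler — which is the economic intuition behind hosting thousands of virtual clusters on one physical cluster.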


Why Outdated jQuery Is Still the Dominant JavaScript Library

Despite its enormous usage, developers today may not even be aware that they’re using jQuery. That’s because it’s embedded in a number of large projects — most notably, the WordPress platform. Many WordPress themes and plugins rely on jQuery. The jQuery library is also a foundational layer of some of today’s most popular JavaScript frameworks and toolkits, like AngularJS and Bootstrap (version 4.0 and below). “A lot of the surprise about jQuery usage stats comes from living in a bubble,” Gołębiowski-Owczarek told me. “Most websites are not complex Web apps needing a sophisticated framework, [they are] mostly static sites with some dynamic behaviors — often written using WordPress. jQuery is still very popular there; it works and it’s simple, so people don’t feel the need to stop using it.” jQuery will continue to be a part of WordPress for some time to come, if for no other reason that it would be difficult to remove it without breaking backward compatibility.


How AI and AR are evolving in the workplace

Businesses are also using AR-based apps for tracking, identifying, and resolving technical issues, as well as for tasks such as retrofitting, assembling, manufacturing, and repairing production lines. The AI market is not only anticipated to help the development of enterprise; the technology is also believed to help achieve business growth objectives and generate value. Nine out of 10 C-suite executives believe they must leverage AI to achieve their growth objectives. ... The challenge with deploying evolving technologies is always that, until they have fully matured, integration can be difficult. With smart glasses, there can also be security and privacy concerns. In medical and surgical settings for example, the use of cameras in operation rooms is very sensitive and controversial. For sensitive scenarios like these, the use of such devices must be agreed and understood to be for the benefit of all beforehand. While AI is a more developed technology, it is also costly, and may require a strong upfront investment.


Good security habits: Leveraging the science behind how humans develop habits

There is a secret recipe for good security habits that we’ve discovered from decades of research: it’s called the habit loop. And you can use the habit loop to hack your own brain for better security. You start with a prompt – which is just the signal that tells you to start a behavior. Then there’s the behavior itself. And finally, the most important step, giving yourself a reward. Even if the reward is just patting yourself on the back, your brain starts to release endorphins so when you see the prompt again next time, your brain will want to do that behavior again to receive another reward. Security can seem scary to some people while to others it might feel like it’s too much work. Using the habit loop can help make security feel easy, because we don’t have to think about habits: by definition they are what we do when we’re on autopilot. But since habits make up about 50% of everything we do in our lives, it’s also the best way to have a massive impact on our security.


More Tech Spending Moves Out of IT

Karamouzis says this is leading to a shift in how organizations buy technology. Enterprises had previously moved from buying products to buying solutions -- a combination of products and services. These products and solutions were purchased in a serial fashion. That doesn’t work anymore, says Karamouzis, because now you must make four to 10 buying decisions concurrently to ensure different digital business initiatives lead to growth. This is part of a new way organizations are buying; they are buying “outcomes,” she says. These changes have pushed organizations more to the public cloud, making enterprises and the entire global economy increasingly dependent on internet-delivered services. The most important of these services are provided directly by or running within hyperscale cloud services providers, says Gartner VP analyst Jay Heiser. “As everything becomes digital, virtually every aspect of society and the economy will have dependence upon the real-time functioning of a small number of public cloud services,” Heiser says.


Why Soul-Based Leadership Will Change the Nature of Remote and Hybrid Work

One of the most highly researched and evidence-based ways to invigorate executive function is through the ancient practice of mindfulness. Although it’s taken on a somewhat "pop" aura relative to its origins 2,500 years ago, developing mindfulness is actually hard work! But the payoff is big in terms of making more informed decisions and leading with care. I often recommend one technique I learned from one of my teachers that I’ve personally modified a bit and called the Standing Ground Practice. You can be anywhere: sitting or standing at your desk or waiting on a corner to meet a friend. It’s ideal if you can go outside and stand facing a tree or something alive that’s naturally rooted in the earth, but it’s not necessary for the practice to be effective in this context. After finding your spot, bring your attention to the contact point between your feet and the ground or floor beneath you. Focus on that point and consider what it feels like. Thoughts about all kinds of things will most certainly interrupt.



Quote for the day:

"Discipline is the bridge between goals and accomplishment." -- Jim Rohn

Daily Tech Digest - October 25, 2021

Why you should use a microservice architecture

Simply moving your application to a microservice-based architecture is not sufficient. It is still possible to have a microservice-based architecture, but have your development teams work on projects that span services and create complex interactions between your teams. Bottom line: You can still be in the development muck, even if you move to a microservice-based architecture. To avoid these problems, you must have a clean service ownership and responsibility model. Each and every service needs a single, clear, well-defined owner who is wholly responsible for the service, and work needs to be managed and delegated at a service level. I suggest a model such as the Single Team Oriented Service Architecture (STOSA). This model, which I talk about in my book Architecting for Scale, provides the clarity that allows your application—and your development teams—to scale to match your business needs. Microservice architectures do come at a cost. While individual services are easier to understand and manage, a microservices application as a whole has significantly more moving parts and becomes a more complex beast of its own.


Routine is a new productivity app that combines task management and notes

One of the most opinionated features of Routine is the dashboard. Whatever you’re doing on your computer, you can pull up the Routine dashboard with a simple keyboard shortcut. By default, that shortcut is Ctrl-Space. The Routine app adds an overlay on top of your screen with a few widgets. It looks a bit like the now-defunct Dashboard on macOS. On that dashboard, you’ll find a handful of things. On the left, you can see the tasks you have to complete today. On the right, you can see how much time you have left before your next meeting and some information about that event. The data is pulled directly from your Google Calendar account. In the center of the screen, Routine displays a big input field called the Console. You can type text and then press enter to create a new task from there. It works a bit like the ‘Quick Add’ feature in Todoist. The idea is that you can add a task without wasting time opening your to-do app, moving to the right project, clicking the add task button and entering text into several fields. With Routine, you can press Ctrl-Space, type some text, press enter and you’re done.


3 Lessons I Learned The Hard Way As A Data Scientist

Whatever algorithm you implement or analysis you make, the results are used in downstream processes or in production. Thus, it is vitally important to make sure the results are correct. By correct results, I do not mean having zero prediction error or hitting 100% accuracy, which would be neither reasonable nor legitimate. In fact, you should be really suspicious of results that are too good to be true. The mistakes I mean are usually data-related issues. For instance, you might make a mistake while joining stock information of products from an SQL table to your main table, which results in serious problems if your solution is based on product stocks. There are almost always controls in your code that prevent mistakes. However, it is not possible to think of each and every possible mistake, so taking a second look is always beneficial. ... The glorious world of machine learning algorithms is very attractive. The urge to use a fancy algorithm and build a model to perform some predictions might cause you to skip digging into the data.


Research finds consumer-grade IoT devices showing up... on corporate networks

"Remote workers need to be aware that IoT devices could be compromised and used to move laterally to access their work devices if they're both using the same home router, which in turn could allow attackers to move onto corporate systems," said Palo Alto. Poor IoT device security stems mainly from manufacturers' desire to keep price points low, cutting security out as an unnecessary overhead. This approach inadvertently exposed large numbers of easily pwned devices to the wider internet – causing such a headache that governments around the world are now preparing to mandate better IoT security standards. Even IoT trade groups have woken up to the threat, albeit perhaps the threat of regulation rather than the security threat, but if that's what it takes, the outcome is no bad thing. ... Half of respondents said they worried about attacks against their industrial IoT devices, with 46 per cent being similarly worried about connected cameras being compromised. Smart cameras are a tried-and-trusted compromise method for miscreants.


The Rise Of No-Code And Low-Code Solutions: Will Your CTO Become Obsolete?

There are many reasons behind the rise of no-code and low-code tools, but the key one is a large imbalance between the ever-growing demand for software development services and the shortage of skilled developers in the market. For decades, there's been movement toward a withdrawal from complicated coding in favor of easy-to-use visual tools. However, over time, no-code and low-code platforms have become more sophisticated, allowing non-developers to build more powerful websites and applications without hiring software specialists. That has even evoked some neo-Luddite concerns and discussions about the potential of such platforms to make good old software developers obsolete. But what’s behind it? Both no-code and low-code approaches hide the complexities of software programming under the mask of high-level abstractions. Low-code reduces programming effort to a minimum, and no-code empowers anyone to create apps without any programming knowledge.


Complex Systems: Microservices and Humans

There is one aspect to this that I think is worth talking about, and that is that we actually already have an organization of people. We work in organizations that are, in general, organized into teams. You see a theoretical org chart here on the left. This might look like something that you might see in your own companies. We have these org charts, and these organizations of teams. Then that org chart doesn't map very neatly onto the microservices architecture necessarily, and maybe it shouldn't. The interrelationships between these teams are actually more subtle and often more complicated than what you see in the org chart. That is because if you have microservices, and you have dependencies between these microservices and interactions between them, then the teams owning them, by necessity, sometimes need to interact with each other. Microservices are constructed in a way that gives as much independence as possible and as much autonomy as possible to the individual teams. 


Maximizing agile productivity to meet shareholder commitments

Companies’ public commitments to ambitious—and sometimes expansive—goals tend to have multiyear timelines, while agile teams are trained to focus on the next three to six months. In organizations with siloed processes, product owners often feel that they don’t have enough visibility into their organizations’ processes to forecast the timeline for their initiatives, let alone to predict the long-term impact of their work. To balance the demands of the near future with longer-term goals, the companies that meet their transformation goals support agile teams with information and expertise. Successful companies provide product owners with relevant financial and operational data for the company, benchmarked to best-in-class organizations, to help them assess the potential value of their work for the next 18–24 months. They also assign initiative owners and relevant subject-matter experts from business functions early in the research and discovery process to help quantify possible improvements to the existing journey.


Satellite IoT dreams are crashing into reality

Even with smaller satellites, building a profitable wireless network is hard. On one side, there’s a capital-intensive phase that requires establishing connectivity (in this case, by building and launching satellites); on the other, these companies must establish a market for that connectivity. But while the economics of building and launching satellites have changed dramatically, demand for devices that rely on satellite networks hasn’t kept up. The biggest growth has come from people-tracking products, such as Garmin inReach satellite communicators, which people can carry into the wilderness and use to call for help if needed. There are also rumors that Apple may include some form of satellite service in an upcoming iPhone. While this is a real and growing market, however, it isn’t enough to justify the launch of constellations by almost a dozen companies whose goal is to be IoT connectivity providers. So would-be connectivity players eschew selling raw bandwidth and turn to full solutions, offering a service that isn’t a commodity and eking out more revenue per customer.


Interesting Application Garbage Collection Patterns

When an application is caching many objects in memory, GC events can’t drop heap usage all the way to the bottom of the graph (as you saw in the earlier ‘Healthy saw-tooth pattern’). ... You can notice that heap usage keeps growing. When it reaches around ~60GB, a GC event (depicted as a small green square in the graph) gets triggered. However, these GC events aren’t able to drop heap usage below ~38GB; please refer to the dotted black arrow line in the graph. In contrast, in the earlier ‘Healthy saw-tooth pattern’, heap usage drops all the way down to ~200MB. When you see this sort of pattern (i.e., heap usage not dropping all the way to the bottom), it indicates that the application is caching a lot of objects in memory. In that case, you may want to investigate your application’s heap using heap dump analysis tools like yCrash, HeapHero, or Eclipse MAT to figure out whether you really need to cache that many objects. Often you will uncover objects that don’t need to be cached in memory at all. Here is the real-world GC log analysis report, which depicts this ‘Heavy caching’ pattern.
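The mechanism behind the raised "heap floor" can be sketched outside the JVM as well: a collector reclaims short-lived temporaries, but it cannot touch anything still reachable from a long-lived cache. The sketch below uses Python's `gc` and `weakref` modules as a stand-in for the JVM behavior the excerpt describes; the `Record` class and cache are invented for illustration.

```python
import gc
import weakref

# Illustration (in Python, not the JVM) of the 'Heavy caching' pattern:
# a GC pass frees temporaries but cannot reclaim objects that a
# long-lived cache still strongly references.

class Record:
    def __init__(self, key):
        self.key = key

cache = {}                    # long-lived cache: the raised heap floor
cached = Record("kept")
cache[cached.key] = cached    # strong reference keeps the object alive

temp = Record("temp")         # short-lived object: the saw-tooth churn

kept_ref = weakref.ref(cached)  # weak refs let us observe liveness
temp_ref = weakref.ref(temp)

del cached, temp              # drop our local references
gc.collect()                  # analogous to a GC event firing

print(kept_ref() is not None)  # True  - the cache pins the object
print(temp_ref() is None)      # True  - the temporary was reclaimed
```

If the cache grew without bound, the post-GC baseline would keep rising just like the ~38GB floor in the graph, which is exactly what a heap dump tool would reveal as retained cached objects.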


Designing the Internet of Things: role for enterprise architects, IoT architects, or both?

Great use cases, but an architectural nightmare that calls for someone to plan and piece it all together into a coherent and viable system. This may be someone in a relatively new role, the IoT architect, or it may mean expanding the current role of the enterprise architect. The need for architects of either stripe was recently explored in a Gartner eBook, which looked at the ingredients needed to ensure success with enterprise IoT. ... "Those having such capabilities in two or more of these areas will be in extremely high demand. The good news is that organizations can use existing digital business efforts to train up candidates." Responsibilities for the IoT architect role include the following: "Engaging and collaborating with stakeholders to establish an IoT vision and define clear business objectives."; "Designing an edge-to-enterprise IoT architecture."; "Establishing processes for constructing and operating IoT solutions."; and "Working with the organization's architecture and technical teams to deliver value." Then there are enterprise architects -- who are likely to see their roles greatly expanded to encompass the extended architectures the IoT is bringing.



Quote for the day:

"Leadership is familiar, but not well understood." -- Gerald Weinberg