
Daily Tech Digest - August 16, 2023

The looming battle over where generative AI systems will run

What is becoming more apparent is that where most generative AI systems will reside (public cloud platforms versus on-premises and edge-based platforms) is still being determined. Vellante’s article points out that AI systems are running neck-and-neck between on-premises and public cloud platforms. Driving this is the assumption that the public cloud comes with some risk, including IP leakage or your data yielding better conclusions for a competitor. Also, enterprises still hold a lot of data in traditional data centers or at the edge rather than in the cloud. This becomes a problem when that data is not easily moved, and data silos are common within most enterprises today. AI systems need data to be of value, so it may make sense to host the AI systems closest to the data. I would argue that data should not exist in silos and that hosting AI beside them merely entrenches an existing problem. However, many enterprises may not have other, more pragmatic choices, given the cost of fixing such issues.


Quantum Computing: Australia’s Next Great Tech Challenge & Opportunity

One of the big opportunities for Australia in this space will be its close relationship with the United States. Because of the sheer value of quantum computing research and technology across both military and civilian IP, nations tend to be more circumspect about sharing information than they are with conventional technology. The downside is that the U.S. isn’t able to draw on the same global pool of talent that it’s used to. A shortage of talent isn’t such a major issue in regular computing fields, because global talent tends to pool and openly share information. ... “As other nations push forward, Australia risks missing out on the potential economic benefits,” a report by the University of Sydney notes. “We could also lose talented workers to countries that are investing more in quantum research.” Projects like the ambitious attempt to build the world’s first complete quantum computer aim to provide local opportunities and funding alongside their top-line goals. Moreover, Australia has a responsibility to ensure quantum technologies are developed and used ethically, and their risks managed.


Q&A: An Introduction to Streaming AI

Streaming AI is about continuously training ML models using real-time data, sometimes with human involvement. The incoming data streams from many sources are analyzed, combined with contextual information, and matched against features that carry condensed information and intelligence specific to the given problem. ML algorithms continually generate these features using the most current data available. On the other hand, as noted earlier, generative AI focuses on generating responses based on a “seed” and then a pattern for finding the next thing to tack on. This works to generate content that conforms to certain parameters the model has “learned.” It is bounded, but not in a way that the boundaries can be easily understood. Until the recent rise of LLMs, considerable effort was invested in making ML models explainable to humans. The question was: how does the model arrive at its result? The “I have no idea” response is hard for humans to accept. In the made-up legal case citations example, the LLM program generated a motion that argued a point, but when asked to explain or validate its path, it just made some stuff up.
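
To make the contrast concrete, here is a minimal sketch of the streaming idea in plain Python (the names and the rolling-mean feature are illustrative, not from the interview): a feature is continuously recomputed from the freshest data rather than learned once from a fixed training set.

```python
# Illustrative sketch of streaming feature generation: the feature is
# continuously refreshed from the most recent data in the stream.
from collections import deque

class StreamingFeature:
    """Maintains a rolling mean over the last `window` observations."""
    def __init__(self, window=100):
        self.values = deque(maxlen=window)

    def update(self, x):
        self.values.append(x)

    @property
    def mean(self):
        return sum(self.values) / len(self.values) if self.values else 0.0

feature = StreamingFeature(window=100)
for reading in [12.1, 11.8, 35.0, 12.3]:   # stand-in for a real-time source
    feature.update(reading)
    # a downstream model would consume the freshest feature value here
    print(f"reading={reading:.1f} rolling_mean={feature.mean:.2f}")
```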


CISO’s role in cyber insurance

Enter cyber insurance, a safety net that offers organisations a way to mitigate the financial impact of these cyber incidents. However, navigating the complex landscape of cyber insurance is no small feat. This is where the Chief Information Security Officer (CISO) comes into play. As the vanguard of an organisation’s cybersecurity efforts, the CISO not only ensures that digital fortresses are robust but also plays a pivotal role in the realm of cyber insurance. Their expertise and insights are instrumental in assessing risks, selecting the right coverage, and ensuring that the organisation gets the most out of its policy. In essence, the CISO bridges the gap between the technical world of cybersecurity and the financial realm of insurance, ensuring that businesses are both well protected and well insured. ... As the primary custodian of an organisation’s cybersecurity posture, the CISO is responsible for conducting a thorough risk assessment. This involves identifying potential vulnerabilities, assessing the potential impact of different types of cyber incidents, and estimating the financial costs associated with these incidents.


Bolstering Africa’s Cybersecurity

In recent weeks and months, we have seen opportunities arise, often provided by academia and government, to improve cyber education. However, some parts of Africa are still without decent levels of electricity. So, is the dream of cyber education for all unattainable? ... Despite this, Africa-based data security analysts point out that a dearth of qualified technicians, coupled with a lack of investment in cybersecurity, has directly contributed to growth in the number and scale of successful cyberattacks. In fact, according to research from IFC and Google, Africa’s e-economy is expected to reach $180 billion by 2025, but its lack of security support could halt that growth. Most of these attack campaigns are based upon spam or phishing efforts derived from information garnered from open source intelligence (OSINT), which is often more effective against a remote workforce that may be more exposed to attack techniques while outside the technical and administrative controls of traditional office work.


Everything Can Change: The Co-Evolution of the CMO and the CISO

Organizations with an established partnership between the CISO and CMO tend to outperform their competitors. This collaboration allows for a cohesive approach to risk management and brand protection, resulting in increased customer trust and loyalty. Organizations that view the CISO purely as a technical operational leader often struggle with cybersecurity initiatives and fail to align security measures with business goals. This approach limits the potential for strategic contributions from the CISO in driving revenue growth and defending value. On the other hand, organizations that integrate the CISO into the go-to-market strategy leverage their expertise to address security concerns proactively, enhancing customer trust and differentiating themselves from competitors. By combining security practices with marketing efforts, these organizations can communicate their commitment to data protection and establish a competitive advantage in terms of trustworthiness. Effective CISOs have a seat at the executive table, allowing them to more directly align security initiatives with business outcomes. 


Machine unlearning: The critical art of teaching AI to forget

Machine unlearning is the process of erasing the influence specific datasets have had on an ML system. Most often, when a concern arises with a dataset, it’s a case of modifying or simply deleting the dataset. But in cases where the data has been used to train a model, things can get tricky. ML models are essentially black boxes. This means that it’s difficult to understand exactly how specific datasets impacted the model during training and even more difficult to undo the effects of a problematic dataset. OpenAI, the creators of ChatGPT, have repeatedly come under fire regarding the data used to train their models. A number of generative AI art tools are also facing legal battles regarding their training data. Privacy concerns have also been raised after membership inference attacks have shown that it’s possible to infer whether specific data was used to train a model. This means that the models can potentially reveal information about the individuals whose data was used to train it.
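
As a toy illustration of what unlearning is up against, the brute-force baseline is to retrain from scratch on the dataset minus the deleted user's rows. Here is a sketch, assuming scikit-learn and made-up data; this is the expensive baseline that approximate methods try to avoid, not any vendor's technique.

```python
# Brute-force "exact unlearning": retrain the model without the user's rows.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
user_ids = rng.integers(0, 50, size=500)     # which user contributed each row

model = LogisticRegression().fit(X, y)       # original model

def unlearn(user_to_forget):
    keep = user_ids != user_to_forget        # drop every row from that user
    return LogisticRegression().fit(X[keep], y[keep])

retrained = unlearn(user_to_forget=7)
# `retrained` provably contains no influence from user 7's data -- the cost
# is a full retrain, which is exactly what cheaper methods try to avoid.
```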


Unit Tests Are Overrated: Rethinking Testing Strategies

Unit tests fare much more poorly on this metric than most people realize. The first problem is that they often don’t provide useful information about the actual state of the system under test. When unit tests are written as acceptance tests, they are often intricately coupled to the specific implementation. They will only fail when the implementation changes, not when changes break the system (e.g., a test that verifies the value of a class constant). Using acceptance tests as regression tests must be done intentionally and thoughtfully, deleting everything that does not provide useful information about the system’s behavior. Another major problem with unit tests is that to test the inputs of one method, you often need to mock out the responses from other methods. When you do this, you are no longer testing the system you have; you are testing a system you assumed you had in the past. The system can break and a unit test will not fail, because the test assumes an input that the real-world system no longer supplies.
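
A hypothetical example of that failure mode: the mock freezes an assumption about a collaborator, so the test keeps passing after the real collaborator changes.

```python
# The mock pins yesterday's assumption about the discount service.
from unittest.mock import Mock

def apply_discount(price, discount_service):
    # assumes the service returns a fraction like 0.1
    return price * (1 - discount_service.discount_for("VIP"))

def test_apply_discount():
    service = Mock()
    service.discount_for.return_value = 0.1   # assumption frozen in the test
    assert apply_discount(100, service) == 90.0

# If the real discount service is later changed to return a percentage (10
# instead of 0.1), production breaks, but this test still passes: it exercises
# the system we assumed we had, not the system we have.
```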


The vital role the CISO has to play in the boardroom

Cybersecurity risk management and information governance are complex and gritty subjects which can be hard to follow for the uninitiated. Boardrooms aren’t the place for the ins and outs of the issue at hand. Learning to communicate effectively is possibly the single most important skill for aspiring and ambitious CISOs. Throughout history, great leaders have demonstrated an excellent ability to communicate, bringing people on a journey with them and gathering support along the way. This is not about dumbing down or glossing over the important parts. Rather, it’s about honing a fundamental business skill: being able to make a compelling argument clearly and concisely. You need to be able to translate critical cybersecurity information into business objectives. Cybersecurity risk management is a regulated requirement. Board directors, officers and senior management can be held liable for the decisions they make around cybersecurity risks and incidents. Clear and effective communication is critical in supporting organisations to make the right decisions that could later be relied upon to protect their people.


3 strategies that can help stop ransomware before it becomes a crisis

Without an incident response plan in place, companies typically panic, not knowing who to call, or what to do, which can make paying the ransom seem like the easiest way out. With a plan in place, however, people know what to do and will ideally have practised the plan ahead of time to ensure disaster recovery measures work the way they're supposed to. ... Having multiple layers of defense, as well as setting up multifactor authentication and data encryption, are fundamental to cybersecurity, but many companies still get them wrong. Stone recently worked with an educational organization that had invested heavily in cybersecurity. When they were hit by ransomware, they were able to shift operations to an offline backup. Then the attackers escalated their demands -- if the organization didn’t pay the ransom, their data would be leaked online. “The organization was well prepared for an encryption event, but not prepared for the second ransom,” Stone says. “There was actual sensitive data that would trigger a number of regulatory compliance actions.”



Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer

Daily Tech Digest - December 19, 2022

7 ways CIOs can build a high-performance team

“People want to grow and change, and good business leaders are willing to give them the opportunity to do so,” adds Cohn. Here, you can get HR involved, encouraging them to bring their expertise and ideas to the table to help you come up with the right approach to training and employee development. In addition, it’s important to remember that an empathetic leader understands that people come from different places and therefore won’t grow and develop in the same manner. Modern CIOs must approach upskilling and training with this reality in mind, advises Benjamin Marais, CIO at financial services company Liberty Group SA. You also need to create opportunities that expose your employees to what’s happening outside the business, suggests van den Berg. This is especially true where it pertains to future technologies and skills because if teams know what’s out there, they better understand what they need to do to keep up. Given the rise in competition for skills in the market, you have to demonstrate your best when trying to attract top talent and retain them, stresses Cohn. 


10 Trends in DevOps, Automated Testing and More for 2023

Developers and QA professionals are some of the most sought-after skilled laborers who are acutely aware of the value they provide to organizations. As we head into next year, this group will continue to leverage the demand for their skills in pursuit of their ideal work environment. Companies that do not consider their developer experience and force pre-pandemic systems onto a hybrid-first world set themselves up for failure, especially when tools for remote and virtual testing and quality assurance are readily available. Developer teams also need to be equally equipped for success through the tools and opportunities that can help ensure an innate sense of value to the organization – and if they don’t have the tools they need, these developers will find them elsewhere. ... We’re starting to see consolidation in both the market and in the user personas we’re all chasing. Testing companies are offering monitoring, and monitoring companies are offering testing. This is a natural outcome of the industry’s desire to move toward true observability: deep understanding of real-world user behavior, synthetic user testing, passively watching for signals and doing real-time root cause analysis—all in service of perfecting the customer experience.


The beautiful intersection of simulation and AI

Simulation models can synthesize real-world data that is difficult or expensive to collect into good, clean and cataloged data. While most AI models run using fixed parameter values, they are constantly exposed to new data that may not be captured in the training set. If unnoticed, these models will generate inaccurate insights or fail outright, causing engineers to spend hours trying to determine why the model is not working. ... Businesses have always struggled with time-to-market. Organizations that push a buggy or defective solution to customers risk irreparable harm to their brand, particularly startups. But moving too slowly is also costly: “also-rans” entering an established market have difficulty gaining traction. Simulations were an important design innovation when they were first introduced, but their steady improvement and ability to create realistic scenarios can slow perfectionist engineers. Too often, organizations try to build “perfect” simulation models that take a significant amount of time to build, which introduces the risk that the market will have moved on.


What is VPN split tunneling and should I be using it?

The ability to choose which apps and services use your VPN of choice and which don't is incredibly powerful. Activities like remote work, browsing your bank's website, or online shopping via public Wi-Fi can definitely benefit from the added security of a VPN, but other pursuits, like playing online games or streaming readily available content, can be hurt by the slight delay VPNs may add to your traffic. The modest decrease to your connection speed is barely noticeable for browsing, but can be disastrous for online games. Being able to simultaneously connect to sensitive sites and services through your secure VPN, and to non-sensitive games and apps means you won't constantly need to enable and disable your VPN connection when switching tasks. This is important as forgetting to enable it at the wrong time could leave you exposed to security risks. ... Split tunneling divides your network traffic in two. Your standard, unencrypted traffic continues to flow unimpeded down one path, while your sensitive and secured data gets encrypted and routed through the VPN's private network. It's like having a second network connection that's completely separate, a tiny bit slower, but also far more secure.


Why don’t cloud providers integrate?

Although it’s not an apples-to-apples comparison, Google’s Anthos enables enterprises to run applications across clouds and other operating environments, including ones Google doesn’t control. As with Amazon DataZone, it’s very possible to manage third-party data sources. One senior IT executive from a large travel and hospitality company told me on condition of anonymity, “I’m sure [cloud vendors] can integrate with third-party services, but I suspect that’s not a choice they’re willing to make. For instance, they could publish some interfaces for third parties to integrate with their control plane as well as other means in the data plane.” Integration is possible, in other words, but vendors don’t always seem to want it. This desire to control sometimes leads vendors down roads that aren’t optimal for customers. As this IT executive said, “The ecosystem is being broken. Instead of interoperating with third-party services, [cloud vendors often] choose to create API-compatible competing services.” He continued, “There is a zero-sum game mindset here.” Namely, if a customer runs a third-party database and not the vendor’s preferred first-party database, the vendor has lost.


How RegTech helps financial services providers overcome regulation challenges

Two main types of RegTech capabilities are helping financial service institutions stay compliant: software that encompasses the whole system — for example a full client onboarding cycle — and software that manages a particular process, such as reporting or document management. Hugo Larguinho Brás explains: “The technologies that handle the whole process from A to Z are typically heavier to deploy, but they will allow you to cover most of your needs. These are also more expensive and often more difficult to adapt in line with a company’s specificities.” “Meanwhile, those technologies that treat part of the process can be combined with other tools. While this brings more agility, the need to find and combine several tools can also make your target operating model more complex to run.” “We see more and more cloud and on-premises solutions available to asset management and securities companies, from software-as-a-service (SaaS) and platform-as-a-service (PaaS) deployed in-house, to solutions combined with outsourced capabilities ...”


What You Need to Know About Hyperscalers

Current hyperscaler adopters are primarily large enterprises. “The speed, efficiencies, and global reach hyperscalers can provide will surpass what most enterprise organizations can build within their own data centers,” Drobisewski says. He predicts that the partnerships being built today between hyperscalers and large enterprises are strategic and will continue to grow in value. “As hyperscalers maintain their focus on lifecycle, performance, and resiliency, businesses can consume hyperscaler services to thrive and accelerate the creation of new digital experiences for their customers,” Drobisewski says. ... Many adopters begin their hyperscaler migration by selecting the software applications that are best suited to run within a cloud environment, Hoecker says. Over time, these organizations will continue to migrate workloads to the cloud as their business goals evolve, he adds. Many hyperscaler adopters, as they become increasingly comfortable with the approach, are beginning to establish multi-cloud estates. “The decision criteria is typically based on performance, cost, security, access to skills, and regulatory and compliance factors,” Hoecker notes.


UID smuggling: A new technique for tracking users online

Researchers at UC San Diego have for the first time sought to quantify the frequency of UID smuggling in the wild, by developing a measurement tool called CrumbCruncher. CrumbCruncher navigates the Web like an ordinary user, but along the way, it keeps track of how many times it has been tracked using UID smuggling. The researchers found that UID smuggling was present in about 8 percent of the navigations that CrumbCruncher made. The team is also releasing both their complete dataset and their measurement pipeline for use by browser developers. The team’s main goal is to raise awareness of the issue with browser developers, said first author Audrey Randall, a computer science Ph.D. student at UC San Diego. “UID smuggling is more widely used than we anticipated,” she said. “But we don’t know how much of it is a threat to user privacy.” ... UID smuggling can have legitimate uses, the researchers say. For example, embedding user IDs in URLs can allow a website to realize a user is already logged in, which means they can skip the login page and navigate directly to content.
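
As a rough sketch of the idea behind such measurement (not CrumbCruncher's actual pipeline), one can flag query-string values that look like unique IDs and reappear across navigations to different hosts:

```python
# Simplified UID-smuggling detection: find long query values shared
# across navigations to different hosts. Heuristics are illustrative.
from urllib.parse import urlparse, parse_qs

navigations = [
    "https://shop.example/item?color=red&uid=a8f3c2d9e1b44f07",
    "https://ads.example/land?ref=home&uid=a8f3c2d9e1b44f07",
]

seen = {}  # candidate ID value -> set of destination hosts
for url in navigations:
    parts = urlparse(url)
    for key, values in parse_qs(parts.query).items():
        for v in values:
            if len(v) >= 16:                      # crude "looks like a UID" test
                seen.setdefault(v, set()).add(parts.hostname)

for value, hosts in seen.items():
    if len(hosts) > 1:                            # same ID on multiple hosts
        print(f"possible smuggled UID {value!r} shared by {sorted(hosts)}")
```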


Bring Sanity to Managing Database Proliferation

How can you avoid being a victim of the bow wave of database proliferation? Recognize that you can allocate your resources in a way that benefits both your bottom line and your stress level by consolidating how you run and manage modern databases. Investing heavily in self-managing the legacy databases used in high volume by many of your people makes a lot of sense. Database workloads that are typically used for mission-critical transaction processing, such as IBM DB2 in financial services, are subject to performance tuning, regular patching and upgrading by specialized database administrators in a kind of siloed sanctum sanctorum. Many organizations will hire an in-house Oracle or SAP Hana expert and create a team, ... But what about the 40 other highly functional, highly desirable cloud databases in your enterprise that aren’t used as often? Do you need another 20 people to manage them? Open source databases like MySQL, MongoDB, Cassandra, PostgreSQL and many others have gained wide adoption, and many of their use cases are considered mission-critical. 


An Ode to Unit Tests: In Defense of the Testing Pyramid

What does the unit in unit tests mean? It means a unit of behavior. There's nothing in that definition dictating that a test has to focus on a single file, object, or function. Why is it difficult to write unit tests focused on behavior? A common problem with many types of testing comes from a tight connection between software structure and tests. That happens when the developer loses sight of the test goal and approaches it in a clear-box (sometimes referred to as white-box) way. Clear-box testing means testing with the internal design in mind to guarantee the system works correctly. This is really common in unit tests. The problem with clear-box testing is that tests tend to become too granular, and you end up with a huge number of tests that are hard to maintain due to their tight coupling to the underlying structure. Part of the unhappiness around unit tests stems from this fact. Integration tests, being more removed from the underlying design, tend to be impacted less by refactoring than unit tests. I like to look at things differently: is this a benefit of integration tests, or a problem caused by the clear-box testing approach? What if we approached unit tests in an opaque-box way?
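
A small invented example makes the distinction concrete: the clear-box test pins the internal design, while the opaque-box test pins only the observable behavior.

```python
# Clear-box vs. opaque-box testing of the same (made-up) component.
class PriceCalculator:
    def _tax(self, amount):          # internal detail, free to change
        return amount * 0.2

    def total(self, amount):         # public behavior
        return amount + self._tax(amount)

def test_clear_box():
    # Breaks the moment _tax is renamed or inlined, even if behavior is intact.
    assert PriceCalculator()._tax(100) == 20

def test_opaque_box():
    # Survives any refactoring that preserves the observable behavior.
    assert PriceCalculator().total(100) == 120
```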



Quote for the day:

"Strategy is not really a solo sport even if you_re the CEO." -- Max McKeown

Daily Tech Digest - July 28, 2022

The Beautiful Lies of Machine Learning in Security

The biggest challenge in ML is availability of relevant, usable data to solve your problem. For supervised ML, you need a large, correctly labeled dataset. To build a model that identifies cat photos, for example, you train the model on many photos of cats labeled "cat" and many photos of things that aren't cats labeled "not cat." If you don’t have enough photos or they're poorly labeled, your model won't work well. In security, a well-known supervised ML use case is signatureless malware detection. Many endpoint protection platform (EPP) vendors use ML to label huge quantities of malicious samples and benign samples, training a model on "what malware looks like." These models can correctly identify evasive mutating malware and other trickery where a file is altered enough to dodge a signature but remains malicious. ML doesn't match the signature. It predicts malice using another feature set and can often catch malware that signature-based methods miss. However, because ML models are probabilistic, there's a trade-off. ML can catch malware that signatures miss, but it may also miss malware that signatures catch. 
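
For illustration only, here is what that supervised setup looks like in miniature, assuming scikit-learn and invented numeric file features (real EPP models use far richer feature sets):

```python
# Toy supervised "malicious / benign" classifier on made-up file features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# each row: [entropy, size_kb, imports_count]; label 1 = malicious, 0 = benign
X = np.array([[7.8, 120, 3], [7.5, 90, 2], [4.1, 800, 40],
              [3.9, 650, 35], [7.9, 60, 1], [4.3, 700, 50]])
y = np.array([1, 1, 0, 0, 1, 0])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=2, random_state=0, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# A probabilistic verdict, not a signature match: the model generalizes to
# mutated files, but can also miss things a signature would have caught.
print(model.predict(X_te))
```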


6 Machine Learning Algorithms to Know About When Learning Data Science

Decision trees are models that resemble a tree-like structure containing decisions and possible outcomes. They consist of a root node, which forms the start of the tree; decision nodes, which are used to split the data based on a condition; and leaf nodes, which form the terminal points of the tree and the final outcome. Once a decision tree has been formed, we can use it to predict values when new data is presented to it. ... Random Forest is a supervised ensemble machine learning algorithm that aggregates the results from multiple decision trees and can be applied to classification and regression problems. Using the results from multiple decision trees is a simple concept and allows us to reduce the problems of overfitting and underfitting experienced with a single decision tree. To create a Random Forest, we first randomly select a subset of samples and features from the main dataset, a process known as “bootstrapping.” This data is then used to build a decision tree. Carrying out bootstrapping avoids the decision trees being highly correlated and improves model performance.
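
A compact sketch of the bagging idea, assuming scikit-learn (in practice RandomForestClassifier handles the bootstrapping for you):

```python
# Hand-rolled random forest: each tree trains on a bootstrap sample
# (rows drawn with replacement) and the forest takes a majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
rng = np.random.default_rng(0)

trees = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap sample
    trees.append(DecisionTreeClassifier(max_features="sqrt").fit(X[idx], y[idx]))

votes = np.mean([t.predict(X) for t in trees], axis=0)
forest_pred = (votes > 0.5).astype(int)          # majority vote
print("training accuracy:", (forest_pred == y).mean())
```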


Data science isn’t particularly sexy, but it’s more important than ever

Not only is data cleansing an essential part of data science, it’s actually where data scientists spend as much as 80% of their time. It has ever been thus. As Mike Driscoll described in 2009, such “data munging” is a “painful process of cleaning, parsing and proofing one’s data.” Super sexy! Now add to that drudgery the very real likelihood that many enterprises, as excited as they are to jump into data science, lack “a suitable infrastructure in place to start getting value out of AI,” as Jonny Brooks has articulated: The data scientist likely came in to write smart machine learning algorithms to drive insight but can’t do this because their first job is to sort out the data infrastructure and/or create analytic reports. In contrast, the company only wanted a chart that they could present in their board meeting each day. The company then gets frustrated because they don’t see value being driven quickly enough, and all of this leads to the data scientist being unhappy in their role. As I have written before: “Data scientists join a company to change the world through data, but quit when they realize they’re merely taking out the data garbage.”


Top 7 Skills Required to Become a Data Scientist

A deep understanding of machine learning and artificial intelligence is a must for implementing tools and techniques involving different kinds of logic, decision trees, etc. These skill sets will enable any data scientist to work on and solve complex problems, specifically those designed for predictions or for deciding future goals. Those who possess these skills will surely stand out as proficient professionals. With the help of machine learning and AI concepts, an individual can work on different algorithms and data-driven models, and simultaneously handle large datasets, for example by cleaning data to remove redundancies. ... Establishing your career as a data science professional will also require the ability to handle complexity. One must be able to identify and develop both creative and effective solutions as and when required. Developing such solutions calls for clarity in data science concepts and the ability to break problems down into multiple parts and align them in a structured way.


The Psychology Of Courage: 7 Traits Of Courageous Leaders

Like so many complex psychological human characteristics, courage can be difficult to nail down. On the surface, courage seems like one of those “I know it when I see it” concepts. In my twenty years spent facilitating and coaching innovation, creativity, strategy and leadership programs, and in partnership with Dr. Glenn Geher of the Psychology Department of the State University of New York at New Paltz, I’ve identified behavioral attributes that often correlate with a person’s access to their courage. Each attribute has influential effects on organizational culture at all levels. Fostering these attributes in your own life (at work and beyond) and within your team can help you lead toward the courageous future you’re striving to achieve. ... Courage requires taking intentional risks. And the bigger the risk, the more courage it takes (and the bigger the outcome can be). Those who understand the importance of facing fear and being vulnerable, who accept that falling and getting up again is part of the journey, tend to have quicker access to their courage.


There is a path to replace TCP in the datacenter

"The problem with TCP is that it doesn't let us take advantage of the power of datacenter networks, the kind that make it possible to send really short messages back and forth between machines at these fine time scales," John Ousterhout, Professor of Computer Science at Stanford, told The Register. "With TCP you can't do that, the protocol was designed in so many ways that make it hard to do that." It's not like the realization of TCP's limitations is anything new. There has been progress to bust through some of the biggest problems, including in congestion control to solve the problem of machines sending to the same target at the same time, causing a backup through the network. But these are incremental tweaks to something that is inherently not suitable, especially for the largest datacenter applications (think Google and others). "Every design decision in TCP is wrong for the datacenter and the problem is, there's no one thing you can do to make it better, it has to change in almost every way, including the API, the very interface people use to send and receive data. It all has to change," he opined.


Typemock Simplifies .NET, C++ Unit Testing

When testing legacy code, you need to test small parts of the logic one by one, such as the behavior of a single function, method or class. To do that, the logic must be isolated from the legacy code, he explained. As Jennifer Riggins explained in a previous post, unit testing differs from integration testing, which focuses on the interaction between these units or components; unit testing catches errors at the unit level earlier, so the cost of fixing them is dramatically reduced. ... Typemock uses special code that can intercept the flow of the software: instead of calling the real code, whether it’s a real method or a virtual method, Typemock can intercept the call, and you can fake different things in the code, he said. Typemock has been around since 2004, when Lopian launched the company with Roy Osherove, a well-known figure in test-driven development. They first released Typemock Isolator in 2006, a tool for unit testing SharePoint, WCF and other .NET projects. Isolator provides an API that helps users write simple and human-readable tests that are completely isolated from the production code.
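
Typemock's interception works at the .NET level; as a language-neutral analogue (not Typemock's API), the same isolation idea looks like this in Python, with invented names: the call made by legacy code is redirected to a fake without editing the legacy code itself.

```python
# Intercept a call made by legacy code so its logic becomes testable.
from unittest.mock import patch

class LegacyBilling:                  # imagine this is untouchable legacy code
    def charge(self, amount):
        raise RuntimeError("talks to a real payment gateway")

    def process_order(self, amount):
        return self.charge(amount)

def test_process_order_in_isolation():
    with patch.object(LegacyBilling, "charge", return_value="ok"):
        # the real charge() is intercepted; the logic around it runs as-is
        assert LegacyBilling().process_order(42) == "ok"
```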


Why Web 3.0 Will Change the Current State of the Attention Economy Drastically

The attention economy requires improvements, and Web 3.0 is capable of making them happen. In the foreseeable future, it will drastically change the interplay between consumers, advertisers and social media platforms. Web 3.0 will give power to the people. It may sound pompous, but it's true. How is that possible? Firstly, Web 3.0 will grant users ownership of their data, so you'll be able to treat your data like it's your property. Secondly, it will enable you to be paid for the work you are doing when making posts and giving likes on social media. Both options provide you with the opportunity to monetize the attention that you give and receive. The agreeable thing about Web 3.0 is that it's all about honest ownership. If a piece of art can be an NFT with easily traceable ownership, your data can be too. If you own your data, you can monetize or offer it on your terms, knowing who is going to use it and how. For instance, there is Permission, a tokenized Web 3.0 advertising platform that connects brands with consumers, with the latter getting crypto rewards for their data and engagement. 


Serverless-first: implementing serverless architecture from the transformation outset

While a serverless-first mindset provides a range of benefits, some businesses may be hesitant to make the transition due to concerns around cloud provider security, vendor lock-in, sunk costs from other strategies and ongoing issues with debugging and development environments. However, even among the most serverless-adverse, this mindset can provide benefits to a select part of an organisation. Take for example a bank’s operations. While the maintenance of a traditional network infrastructure is crucial for uptime of the underlying database, with a serverless approach they have the freedom to implement an agile mindset with consumer-facing apps and technologies as demand grows. Agile and serverless strategies typically go hand-in-hand, and both can encourage quick development, modification and adaptation. In relation to concerns around vendor lock-in, some organisations may look towards a cloud-agnostic strategy. However, writing software for multiple clouds removes the ability to use features offered by one specific cloud, meaning any competitive advantage of using a specific vendor is then lost. 


CISO in the Age of Convergence: Protecting OT and IT Networks

Pan Kamal, head of products at BluBracket, a provider of code security solutions, says one of the first steps an organization can take is to create an IT-OT convergence task force that maps out the asset inventory and then determines where IT security policy needs to be applied within the OT domain. “Review industry-specific cybersecurity regulations and prioritize implementation of mandatory security controls where called for,” Kamal adds. “I also recommend investing in a converged dashboard -- either off the shelf or create a custom dashboard that can identify vulnerabilities and threats and prioritize risk by criticality.” Then, organizations must examine the network architecture to see if secure connections with one-way communications -- via data diodes for example -- can eliminate the possibility of an intruder coming in from the corporate network and pivoting to the OT network. Another key element is conducting a review of security policies related to both the equipment and the software supply chain, which can help identify secrets in code present in git repositories and help remediate them prior to the software ever being deployed.



Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik

Daily Tech Digest - October 29, 2021

How to become an entrepreneurial engineer and create your own career path

"To be a successful entrepreneurial engineer, you must wear two hats: one with a deep technical focus and the other focused on the goals of the business," said Loren Goodman, CTO and co-founder of InRule Technology. "This allows you to make decisions in real-time leveraging your understanding of diminishing returns on both fronts. The why, the what and the how are traditionally separated, and small changes to any part can have exaggerated effects on the others. You bring this thinking together—for example, knowing that a feature can be done in a fraction of the time if a small part was removed from scope and also knowing that that part is not core to the business need." Goodman stressed that entrepreneurial engineers must be curious about the bigger picture and be unafraid to take on challenging problems. They must also be success-focused, with a relentless passion for achieving the best solution to difficult problems, no matter how unrealistic things might seem. Finally, he said, a successful entrepreneurial engineer must be scrappy: "You are going to have to be comfortable working without all the necessary resources for a long time while still staying focused on your objectives."


Forensic Monitoring of Blockchains Is Key for Broader Industry Adoption

In the event that an adversary corrupts more than 1/3 of the master nodes in the BFT committee of any given epoch, it is then technically possible for said adversary to violate safety and jeopardize the consensus by creating forks, resulting in two or more finalized blockchains. However, certain messages would need to be signed and sent by these nodes to make this happen, which can then be detected by the system immediately after a fork with a length of only one appears. The signed messages can then be used as irrefutable proof of the misbehavior. Those messages are embedded into the blockchain and can be obtained by querying master nodes for forked blockchains. This is what enables the forensic monitoring feature, which can identify as many Byzantine master nodes as possible, all while obtaining the proof from querying as few witnesses as possible. For example, two separate honest nodes, each having access to one of the two conflicting blockchains respectively, are sufficient for the proof.
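
Schematically (this is not the cited system's actual protocol, just the underlying idea), detecting such misbehavior reduces to finding two conflicting signed votes from the same node at the same chain height:

```python
# If a node signs two different blocks for the same height, the pair of
# signed messages is itself the irrefutable proof of equivocation.
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedVote:
    node: str        # who signed
    height: int      # position in the chain
    block_hash: str  # what they voted for (actual signature omitted for brevity)

def find_equivocators(votes):
    seen, proofs = {}, []
    for v in votes:
        key = (v.node, v.height)
        if key in seen and seen[key].block_hash != v.block_hash:
            proofs.append((seen[key], v))   # two conflicting signed votes
        seen.setdefault(key, v)
    return proofs

votes = [SignedVote("n1", 7, "0xaaa"), SignedVote("n2", 7, "0xaaa"),
         SignedVote("n1", 7, "0xbbb")]     # n1 signed both sides of the fork
for a, b in find_equivocators(votes):
    print(f"misbehavior by {a.node}: signed {a.block_hash} and {b.block_hash}")
```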


Infrastructure-as-Code: 6 Best Practices for Securing Applications

Research from security platform provider Snyk reveals that many companies are only starting out on their IaC journey, with 63% just beginning to explore the technology and only 7% stating they’ve implemented IaC to meet current industry standards. And with this practice comes changes in responsibility: IaC further extends developers’ responsibility to include securing their code and infrastructure. Misconfigurations can easily introduce security risks if best practices are not followed. In fact, according to Gartner, “70% of attacks against containers will be from known vulnerabilities and misconfigurations that could have been remediated.” Often, security trails behind the usage of IaC, resulting in configuration issues that are only detected after applications are deployed. That doesn’t have to be the case. In fact, the best way to ensure every configuration is secure, while still benefiting from the speed and repeatability of IaC, is to build security testing for IaC into developers’ workflows, the same as other forms of code.
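
As a deliberately tiny sketch of building such checks into the developer workflow (real scanners like Checkov go far deeper), a script can fail the build when a parsed resource matches a known misconfiguration. The resource shape below is invented for illustration:

```python
# Minimal IaC policy check: flag SSH open to the whole internet before deploy.
resources = [
    {"type": "security_group_rule", "name": "web", "cidr": "0.0.0.0/0", "port": 443},
    {"type": "security_group_rule", "name": "ssh", "cidr": "0.0.0.0/0", "port": 22},
]

def check_open_ssh(resource):
    if (resource["type"] == "security_group_rule"
            and resource["port"] == 22
            and resource["cidr"] == "0.0.0.0/0"):
        return f"{resource['name']}: SSH open to the whole internet"

findings = [msg for r in resources if (msg := check_open_ssh(r))]
for f in findings:
    print("FAIL", f)      # run in CI and fail the build before anything deploys
```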


The shift from DevOps and security to DevSecOps: 5 key roadblocks

There is DevOps plus security, and then there’s DevSecOps. What’s the difference? In the first case, security is a third wheel. In the second, it’s the third leg of the stool—an integral part of the system that’s almost unnoticeable unless or until it disappears. Indeed, to be effective, security must be everywhere—throughout the pipeline used to build and deploy as well as the runtime environment. In the DevSecOps model, security is a shared responsibility for development, security and operations teams and throughout the entire IT lifecycle. However, many organizations are challenged to integrate, rather than just tack on, security measures. This is a huge issue when a company’s own security is at stake, but an increasing number of attacks on the software supply chain is leaving tens, hundreds, even thousands of organizations vulnerable. There are many granular recommendations for achieving DevSecOps. Here are the bigger-picture issues that your organization must address to move beyond security as an afterthought.


Agile Architecture - What Is It?

From the definition, two very important terms emerge: Emergent Design and Intentional Architecture. Emergent Design is the process of analyzing and extending the architecture just enough to implement and validate the next increment in the development cycle. Intentional Architecture is about seeing the big picture. Large corporations need to respond to new business challenges with large-scale architectural initiatives, and at that scale, meeting a business objective involves multiple teams, products, and systems. In this case, Emergent Design is not enough, as it is circumscribed to a single team. Without Intentional Architecture, we can face several problems, such as difficulty integrating, validating, and maintaining the fulfillment of non-functional system requirements, low reuse, redundancy of solutions, etc. Intentional Architecture gives the teams a common objective/destination, allowing the alignment of efforts and the parallelization of the work of independent teams. In other words, it is the guiding track, the glue between the teams' work.


NRA Reportedly Hit By Russia-Linked Ransomware Attack

The NRA did not immediately respond to Information Security Media Group's request for comment. But Andrew Arulanandam, managing director of public affairs for the NRA, took to Twitter to say: "NRA does not discuss matters relating to its physical or electronic security. However, the NRA takes extraordinary measures to protect information regarding its members, donors, and operations - and is vigilant in doing so." Allan Liska, a ransomware analyst at the cybersecurity firm Recorded Future, told NBC that Grief is "the same group" as Evil Corp. The news outlet verified that the information in the leaked files includes grant proposal forms, names of recent grant recipients, an email sent to a grant winner, a federal W-9 form and minutes from the organization's virtual meeting in September. Sam Curry, CSO of Cybereason, tells ISMG, "It's unlikely this is a strategic attack, but time will tell. The way it would be strategic is to further divide the left from the right in the U.S. … The most likely scenario is that it's motivated by greed, and it has the potential to inadvertently explode politically. The next move is in the NRA's hands."


Is the Indian SaaS Story Overhyped?

Experts watching the SaaS space opine that after Freshworks recent listing, global perception towards Indian SaaS companies has changed. Last month, Freshworks became the first Indian software maker to list on Nasdaq. “SaaS companies in India are gaining acceptance and attention from investors. Initially, investors were slow due to the nature of revenue which is a money sucker but as the customer base grew with a lower drop, the revenue started to look good. Things have changed a lot after Postman and Freshworks. Indian SaaS companies are now seriously looked at as potential unicorns,” said Anil Joshi, managing partner, Unicorn India Ventures. The SaaS ecosystem is relatively nascent in India and is led by players such as Freshworks, Capillary, Eka, etc., said Anurag Ramdasan, partner, 3one4 Capital. “While there are double-digit unicorns in Indian SaaS today, it’s still a very early ecosystem and we are seeing a lot of innovative SaaS in the seed to series A stage in India,” he said. Many companies that have become soonicorns and unicorns have great consumer stories and investors today look at India as a huge consumer story.


How do I select an SD-WAN solution for my business?

Network security is also gaining greater importance as cyber-security threats multiply, leading to cloud-based security techniques converging with SD-WAN in the SASE framework. But the transition to these technologies can be challenging, with significant support required from the SD-WAN partner. Therefore, enterprises need to evaluate SD-WAN providers based on three principal criteria. First, does the provider’s network reach align with the enterprise’s geographic locations and does the provider offer a Tier 1 IP backbone to realize the full performance advantages of SD-WAN? Second, does the provider offer a managed SD-WAN, including local internet or MPLS access, with end-to-end delivery, technical implementation support, and service assurance to help manage complexity? Third, does the provider have a clear SASE roadmap integral to its SD-WAN vision? This includes services like zero-trust network access (ZTNA) and cloud access security broker (CASB) for remote workers and cloud firewall and secure web gateway (SWG) to support the branch level.


The Rise of Event-Driven Architecture

In the REST framework, an API isn’t aware of the state of objects. The client queries the API to find out the state, and the role of the API is to respond to the client with the information. However, with an event-driven API, a client can subscribe to the API, effectively instructing it to monitor the state of objects and report back with real-time updates. Therefore, behavior shifts from stateless handling of repeatable, independent requests to stateful awareness of the virtual objects modeled on real-world operations. Event-driven APIs are a great way to meet the demands of modern end-users who expect customized and instantaneous access to information. Applying these APIs is easy to do in one-off, bespoke environments. However, things get more complicated when you need to offer this level of service at scale, and not every enterprise is ready to handle that level of complexity. To avoid amassing significant technical debt, organizations and developers should offload this complexity to a third party with the capabilities to synchronize digital experiences in real-time and at scale.
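
A minimal sketch of that shift, with invented names: the client subscribes once, and the API pushes every state change instead of answering repeated queries.

```python
# Event-driven instead of request/response: subscribers are notified
# of state changes as they happen; no polling required.
class OrderEvents:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set_status(self, order_id, status):
        # stateful awareness: every change is pushed to subscribers in real time
        for notify in self._subscribers:
            notify({"order": order_id, "status": status})

events = OrderEvents()
events.subscribe(lambda e: print("client saw:", e))
events.set_status("A-123", "shipped")   # no client request was needed
```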


We Are Testing Software Incorrectly and It's Costly

The tests you write are tightly coupled to the underlying design of your code. Design is constantly evolving. You now not only have to refactor the designs of your production code; you have to change your tests, too! In other words, your tests should help you with refactoring by giving you confidence, but instead they only make you work harder while giving no confidence that things still work correctly. I will not even mention the mock hell for brevity (please Google about it). But instead of abandoning refactoring or unit tests, all you need to do is free yourself from the mistaken definition of "unit testing." Focus on testing behaviors! Instead of writing unit tests for every public method of every class, write unit tests for every component (i.e., user, product, order, etc.), covering every behavior of each component and focusing on the public interface of the unit. To achieve that, you will need to learn how to structure your code properly. Please don't package your code by technical concerns (controllers, services, repositories, etc.). Senior devs structure their code by domain.
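
A hypothetical sketch of what that looks like in practice: one domain component, exercised only through its public interface, with one test per behavior.

```python
# Behavior-focused tests for a single component (names are made up).
class Order:
    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

# One test per behavior of the Order component. How it stores items
# internally (list, dict, database row) can be refactored freely
# without touching these tests.
def test_new_order_totals_zero():
    assert Order().total() == 0

def test_total_sums_added_items():
    order = Order()
    order.add("book", 12.0)
    order.add("pen", 3.0)
    assert order.total() == 15.0
```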



Quote for the day:

"The world's greatest achievers have been those who have always stayed focussed on their goals and have been consistent in their efforts." -- Roopleen

Daily Tech Digest - October 17, 2021

Multi-User IP Address Detection

When an Internet user visits a website, the underlying TCP stack opens a number of connections in order to send and receive data from remote servers. Each connection is identified by a 4-tuple (source IP, source port, destination IP, destination port). Repeated requests from the same web client will likely be mapped to the same source port, so the number of distinct source ports can serve as a good indication of the number of distinct client applications. By counting the number of open source ports for a given IP address, you can estimate whether this address is shared by multiple users. User agents provide device-reported information such as browser and operating system versions. For multi-user IP detection, you can count the number of distinct user agents in requests from a given IP. To avoid overcounting web clients per device, you can exclude requests identified as triggered by bots and count only requests from user agents used by web browsers. There are some tradeoffs to this approach: some users may use multiple web browsers, and some other users may have exactly the same user agent.
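
A rough sketch of these counting heuristics over illustrative connection logs (field names are made up):

```python
# Estimate how many distinct clients sit behind each source IP by
# counting distinct source ports and user agents per IP.
from collections import defaultdict

log = [  # (source_ip, source_port, user_agent)
    ("198.51.100.7", 50312, "Mozilla/5.0 (Windows NT 10.0)"),
    ("198.51.100.7", 50313, "Mozilla/5.0 (Windows NT 10.0)"),
    ("198.51.100.7", 61044, "Mozilla/5.0 (Macintosh)"),
    ("203.0.113.9", 40001, "Mozilla/5.0 (X11; Linux)"),
]

ports = defaultdict(set)
agents = defaultdict(set)
for ip, port, ua in log:
    ports[ip].add(port)
    agents[ip].add(ua)

for ip in ports:
    # many distinct ports and user agents suggest a shared (multi-user) IP
    print(ip, "ports:", len(ports[ip]), "agents:", len(agents[ip]))
```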


Critical infrastructure security dubbed 'abysmal' by researchers

"While nation-state actors have an abundance of tools, time, and resources, other threat actors primarily rely on the internet to select targets and identify their vulnerabilities," the team notes. "While most ICSs have some level of cybersecurity measures in place, human error is one of the leading reasons due to which threat actors are still able to compromise them time and again." Some of the most common issues allowing initial access cited in the report include weak or default credentials, outdated or unpatched software vulnerable to bug exploitation, credential leaks caused by third parties, shadow IT, and the leak of source code. After conducting web scans for vulnerable ICSs, the team says that "hundreds" of vulnerable endpoints were found. ... Software accessible with default manufacturer credentials allowed the team to access the water supply management platform. Attackers could have tampered with water supply calibration, stop water treatments, and manipulate the chemical composition of water supplies.


What is a USB security key, and how do you use it?

There are some potential drawbacks to using a hardware security key. First of all, you could lose it. While security keys provide a substantial increase in security, they also provide a substantial increase in responsibility. Losing a security key can result in a serious headache. Most major websites suggest that you set up backup 2FA methods when enrolling a USB security key, but there's always a small but real chance that you could permanently lose access to a specific account if you lose your key. Security-key makers suggest buying more than one key to avoid this situation, but that can quickly get expensive. Cost is another issue. A hardware security key is the only major 2FA method for which you have to spend money. You can get a basic key supporting the U2F/WebAuthn standards for $15, but some websites and workplaces require specialized protocols for which compatible keys can cost up to $85 each. Finally, limited usability is also a factor. Not every site supports USB security keys. If you're hoping to use a security key on every site for which you have an account, you're guaranteed to come across at least a few that won't accept your security key.


Future-proofing the organization the ‘helix’ way

The leaders need a high level of domain expertise, obviously, but other skills as well. As capability managers, these leaders must excel at strategic workforce management, for example—not short-sighted resource attribution for the products at hand, but the strategic foresight and long-term perspective to understand what the workload will be today, tomorrow, three to five years from now. They need to understand what skills they don’t have in-house and must acquire or build. These leaders become supply-and-demand managers of competence. They must also be excellent—and rigid—portfolio managers who make their resource decisions in line with the overall transformation. The R&D organization, for example, cannot start research projects inside a product line whose products are classified as “quick return,” even if they have people idle. It’s a different mindset. In fact, R&D leaders don’t necessarily have to be the best technologists in order to be successful. They must be farsighted and able to anticipate trends—including technological trends—but ultimately what matters is their ability to build the department in a way that ensures it’s ready to carry the demands of the organization going forward.


Robots Will Replace Our Brains

Over the years, despite numerous fruitless attempts, no one has come close to recreating this organ in all its intricate detail; such an invention is hard to fathom in the scientific world at this point, even considering the discoveries that surface every other day. As one research director notes, we are very good at gathering data and developing algorithms to reason with that data. Nevertheless, for the AI we have now, that reasoning is only as sound as the data, one step removed from reality. Science fiction movies, for instance, depict only a thin line separating human intelligence from artificial intelligence. ... Researchers at the U.S. National Institute of Standards and Technology (NIST) are building a new superconducting switch that may soon enable computers to analyze and make decisions just as humans do. The ultimate goal is to integrate this switch into everyday life, from transportation to medicine. The invention contains an artificial synapse that processes electrical signals just as a biological synapse does and converts them to an adequate output, just like the brain.


Data Storage Strategies - Simplified to Facilitate Quick Retrieval of Data and Security

No matter what the reason for the downtime, it can be very costly. An efficient data strategy goes beyond just deciding where data will be kept on a server. It must contain methods for backing up the data and ensuring that it is simple and fast to restore after a disaster, hardware failure, or human mistake. Putting a disaster recovery plan in place is a good start, as it guarantees that data and the related systems are available after a minimum of disruption. Cloud-based disaster recovery and virtualization are now required components of every disaster recovery strategy; together they can assure you that no customer will ever experience more downtime than they can afford at any given moment. By relying on a cloud storage service, the company can outsource the storage problem and minimize the costs associated with internal resources. With this technology, the business does not need any internal resources or assistance to manage and keep its data; the data warehousing consulting services provider takes care of everything.


RISC-V: The Next Revolution in the Open Hardware Movement

You could always build your own proprietary software and be better than your competitors, but the world has changed. Now almost everyone is standing on the shoulders of giants. When you need an operating system kernel for a new project, you can use Linux directly. No need to recreate a kernel from scratch, and you can also modify it for your own purpose (or write your drivers). You’ll be certain to rely on a broadly tested product because you are just one of a million users doing the same. That would be exactly what relying on an open source CPU architecture could provide. No need to design things from scratch; you can innovate on top of the existing work and focus on what really matters to you, which is the value you are adding. At the end of the day, it means lowering the barriers to innovate. Obviously, not everyone is able to design an entire CPU from scratch, and that’s the point: You can bring only what you need or even just enjoy new capabilities provided by the community, exactly the same way you do with open source software, from the kernel to languages.


The Conundrum Of User Data Deletion From ML Models

As the name says, approximate deletion enables us to eliminate the majority of the implicit data associated with users from the model. They are ‘forgotten,’ but only in the sense that our models can be retrained at a more opportune time. Approximate deletion is particularly useful for rapidly removing sensitive information or unique features associated with a particular individual that could be used for identification in the future, while deferring computationally intensive full model retraining to times of lower computational demand. Under certain assumptions, approximate deletion can even accomplish the exact deletion of a user’s implicit data from the trained model. The deletion challenge has been tackled differently by researchers than by practitioners in the field. Additionally, the researchers describe a novel approximate deletion technique for linear and logistic models whose cost is linear in the feature dimension and independent of the amount of training data. This is a significant improvement over conventional approaches, which depend superlinearly on the dimension.


9 reasons why you’ll never become a Data Scientist

Have you ever invested an entire weekend in a geeky project? Have you ever spent your nights browsing GitHub while your friends were out partying? Have you ever said no to your favorite hobby because you would rather code? If you can't answer yes to any of the above, you're not passionate enough. Data Science is about facing really hard problems and sticking with them until you find a solution. If you're not passionate enough, you'll shy away at the first sight of difficulty. Think about what attracts you to becoming a Data Scientist. Is it the glamorous job title? Or is it the prospect of plowing through tons of data in search of insights? If it's the latter, you're heading in the right direction. ... Only crazy ideas are good ideas, and as a Data Scientist you'll need plenty of those. Not only will you need to be open to unexpected results (they occur a lot!), but you'll also have to develop solutions to really hard problems. That requires a kind of extraordinary thinking you can't achieve with ordinary ideas.


Why Don't Developers Write More Tests?

If deadlines are tight or team leaders aren't especially committed to testing, it is often one of the first things software developers are forced to skip. On the other hand, some developers just don't think tests are worth their time. “They might think, ‘this is a very small feature, anyone can create a test for this, my time should be utilized in something more important,’” Mudit Singh of LambdaTest told me. ... In truth, there are some legitimate limitations to automated tests. Like many complicated matters in software development, the choice to test or not is about understanding the tradeoffs. “Writing automated tests can provide confidence that certain parts of your application work as expected,” Aidan Cunniff, the CEO of Optic, told me, “but the tradeoff is that you’ve invested a lot of time ‘stabilizing’ and making ‘reliable’ that part of your system.” ... While tests might have made my new feature better and more maintainable, they were technically a waste of time for the business, because the feature wasn’t really what we needed. We failed to invest enough time understanding the problem and making a plan before we started writing code.
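As a concrete (and deliberately tiny) example of the kind of test being traded off here, the following pytest sketch locks in the behavior of one small feature; the function and its rules are invented purely for illustration.

# test_discount.py -- a minimal, hypothetical unit test: cheap to write,
# and it "stabilizes" the behavior of one small piece of the system.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical feature under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)

A few minutes of work buys regression protection, but as Cunniff notes, it also cements that behavior in place, which is wasted effort if the feature itself turns out to be the wrong one.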



Quote for the day:

"Leaders are readers, disciples want to be taught and everyone has gifts within that need to be coached to excellence." -- Wayde Goodall

Daily Tech Digest - July 19, 2021

IoT security: Development and defense

While IoT adoption continues to grow, the standards, compliance requirements and secure coding practices surrounding IoT have not advanced at the same rate. Recent high-profile software supply chain attacks have brought the issue of secure coding into sharp focus, prompting the Biden administration to issue an executive order requiring federal agencies to purchase and deploy only secure software. This pivotal shift will have an immediate impact on global software development processes and lifecycles, especially considering the vast reach of U.S. federal procurement. Virtually all device manufacturers and software companies will be affected directly as the administration begins to increase obligations on the private sector and establish new security standards across the industry. Specific to IoT, the order directs the federal government to initiate pilot programs to educate the public about the security capabilities of IoT devices, and to identify IoT cybersecurity criteria and secure software development practices for a consumer-labeling program.


Efficient unit-testing with a containerised database

The real problem is mixing two languages in one body of code. The dbUtil handle is just a boilerplate-reduction device here; the raw SQL is still there. We still can’t test the complex individual statements separately from the simple yet crucial control logic captured in the if-statements, which depends solely on the state of the person object, not on the database. Sure, we can test this control logic fine if we mock out the calls to the database: the mock for dbUtil returns a prepared list of person objects, and we can verify that it is invoked correctly under the two different conditions. But that unavoidably leaves the SQL untested. If we want to test the execution of these statements, we need to run the entire code inside the for loop, this time against a real database. That test needs to set up the conditions for all three execution paths (condition 1, conditions 1 and 2, or neither), as well as verify the resulting state after the void statement executions. It can be done, but we are of necessity testing both the Java and SQL realms here. That’s hardly the lean unit testing we’re looking for.
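The article's example is Java, but the containerised-database pattern it leads toward looks roughly like the Python sketch below, which uses the testcontainers package to spin up a throwaway PostgreSQL instance so the real SQL is actually exercised. The schema and assertions are invented for illustration; the testcontainers and sqlalchemy packages (plus a PostgreSQL driver) and a local Docker daemon are assumed.

# A minimal containerised-database test: the SQL runs against a real,
# disposable PostgreSQL server instead of a mock.
from testcontainers.postgres import PostgresContainer
from sqlalchemy import create_engine, text

def test_insert_and_select_person():
    with PostgresContainer("postgres:16") as pg:   # throwaway real database
        engine = create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            conn.execute(text("CREATE TABLE person (id int, name text)"))
            conn.execute(text("INSERT INTO person VALUES (1, 'Alice')"))
            name = conn.execute(
                text("SELECT name FROM person WHERE id = 1")
            ).scalar_one()
        assert name == "Alice"                     # the real SQL is exercised

The container starts fresh for the test and is discarded afterwards, so each run sets up exactly the state its execution paths require, which is what the mocked version could never verify.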


Ansible vs Docker: A Detailed Comparison Of DevOps Tools

Ansible is an open-source automation engine that helps in DevOps, improving a technology environment’s scalability, consistency, and reliability. It is mainly used for demanding IT tasks such as configuration management, application deployment, intra-service orchestration, and provisioning. In recent times, Ansible has become the top choice for software automation in many organizations. Automation is one of the most crucial aspects of industry these days; unfortunately, many IT environments are too complex, and often need to scale too quickly, for system administrators and developers to keep up manually. ... Docker is an open-source platform for developing, shipping, and running applications. It enables developers to package applications into containers: standardized, executable components that combine the application source code with the operating system libraries and dependencies required to run that code in any environment. Containers can be created without Docker, but the platform and its user interface make it easier, simpler, and safer to build, deploy, and manage them.
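To make the Docker half of the comparison concrete, here is a minimal sketch that drives Docker programmatically through its official Python SDK (the docker package). The image and command are illustrative, and a running local Docker daemon is assumed; Ansible, by contrast, is normally driven through YAML playbooks rather than code.

# pip install docker
import docker

client = docker.from_env()                      # connect to the local daemon
output = client.containers.run(
    "python:3.12-slim",                         # image bundling the runtime
    ["python", "-c", "print('hello from a container')"],
    remove=True,                                # clean up after the run
)
print(output.decode().strip())                  # -> hello from a container

The point of the packaging model is visible even in this toy: the host needs nothing but Docker itself, because the image carries the interpreter and libraries the code depends on.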


Delegation and Scale: How Remote Work Affected Various Industries

The basic goal of delegation of authority is to enable efficient organization. Just as no single individual in a company can perform all of the tasks required to achieve the group's goals, it becomes arduous for management to wield all decision-making authority as a business expands. There is a limit to the number of people a manager can successfully monitor and make decisions for; when that threshold is reached, authority must be handed to subordinates. While centralization was still viable before the pandemic, it no longer was after back-to-back lockdowns and economic slowdowns. In such a situation, delegation came as a boon that not only kept workflows active but also helped scale growth. ... Delegating gives your team greater confidence, makes them feel important, and allows them to demonstrate their abilities. The result is mutual appreciation, with colleagues motivating one another to work harder and staying devoted to attaining their goals.


Seeking a Competitive Edge vs. Chasing Savings in the Cloud

If companies do not make changes to their IT operations in response to a migration, finding savings can be more difficult, L’Horset says. “In the industry, there’s a lot of debate: Is cloud saving you money or not? Our research indicates that even at the basic level, yes it does,” he says. “The difference between the cost-savings, which you can get through cloud, and the value of innovation that you absolutely can and should get through cloud, is the fundamental reason you should go.” Roy Illsley, chief analyst with Omdia, the research arm of Informa Tech, says the cost benefits of cloud can be positive if the workload is variable in its resource requirements, if those requirements match the cloud provider's packaging of resources, or if it requires high availability. "If the workload is stable in its resource requirements then on-premises is more cost effective," he says. Companies responding to the Accenture survey that did not list cloud as a top priority still saw significant cost-savings, says Jim Wilson, managing director of information technology and business research at Accenture Research.


7 Ways AI and ML Are Helping and Hurting Cybersecurity

AI/ML is used in network traffic analysis, intrusion detection systems, intrusion prevention systems, secure access service edge, user and entity behavior analytics, and most technology domains described in Gartner's Impact Radar for Security. In fact, it's hard to imagine a modern security tool without some kind of AI/ML magic in it. ... Through social engineering and other techniques, ML is used for better victim profiling, and cybercriminals leverage this information to accelerate attacks. For example, in 2018, WordPress websites experienced massive ML-based botnet infections that granted hackers access to users' personal information; ... Ransomware is experiencing an unfortunate renaissance. Examples of criminal success stories are numerous; one of the nastiest incidents led to Colonial Pipeline's six-day shutdown and $4.4 million ransom payment; ... ML algorithms can create fake messages that look like real ones and aim to steal user credentials. In a Black Hat presentation, John Seymour and Philip Tully detailed how an ML algorithm produced viral tweets with fake phishing links that were four times more effective than human-created phishing messages.
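On the defensive side, much of the "AI/ML magic" in network traffic analysis boils down to anomaly detection. Here is a minimal, self-contained sketch using scikit-learn's IsolationForest over synthetic per-connection features; the feature choices and numbers are invented purely for illustration.

# Unsupervised anomaly detection over simple per-connection features,
# the rough shape of ML-based network traffic analysis.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# columns: bytes sent, bytes received, connection duration (seconds)
normal = rng.normal(loc=[500, 2000, 1.0], scale=[100, 400, 0.3], size=(1000, 3))
exfil = rng.normal(loc=[50000, 200, 30.0], scale=[5000, 50, 5.0], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(exfil))   # -1 flags anomalies; expect all five flagged

Real deployments use far richer features and feedback loops, but the principle is the same: learn what "normal" looks like and flag deviations, without needing labeled attacks.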


Electronic signatures: please sign on the digital line

First, let’s look at the importance of content to a business. In simple terms, content is the inherent value of a company. It’s NASA’s designs for its new space station, AstraZeneca’s highly regulated pharmaceutical patents, and Oxfam’s humanitarian aid records. It’s the clinical trial results for the next breakthrough vaccine, or the blueprint for an innovative new approach to flood management. Content is the entire work of an organisation and is completely unique to every company; it is the database of its most valuable insights. But to effectively realise this value, organisations need to find a single place for their content. Separating content between different silos and applications creates friction, which can stand in the way of employees accessing and sharing information, inhibiting innovation and productivity. Applications in today’s content-driven world are often judged by their ease of integration with other technologies. As a result, businesses are turning to single platforms where content can be securely stored and managed, where all compliance requirements are met, and where all teams can collaborate on the content, both internally and externally.


Protect your smartphone from radio-based attacks

An IMSI catcher is equipment designed to mimic a real cell tower so that a targeted smartphone will connect to it instead of the real cell network. Various techniques may be employed to achieve this, such as masquerading as a neighboring cell tower or jamming the competing 5G/4G/3G frequencies with white noise. After capturing the targeted smartphone’s IMSI (the ID number linked to its SIM card), the IMSI catcher situates itself between the phone and its cellular network. From there, it can be used to track the user’s location, extract certain types of data from the phone, and in some cases even deliver spyware to the device. Unfortunately, there’s no surefire way for the average smartphone user to know that they’re connected to a fake cell tower, though there may be some clues: perhaps a noticeably slower connection, or a change of band in the phone’s status bar (from LTE to 2G, for example). Thankfully, 5G in standalone mode promises to make IMSI catchers obsolete, since the Subscription Permanent Identifier (SUPI), 5G’s IMSI equivalent, is never disclosed in the handshake between smartphone and cell tower.


The value of data — a new structural challenge for data scientists

Some companies with data scientists in place have difficulty operationalising their skills. Looking at the volumes of data organisations process, and at their different structures and architectures, not every company needs a data scientist in its ranks of data experts. For companies managing an astronomical amount of data, across multiple channels and with a complex structure, the expertise of a data scientist will prove beneficial in modelling data, querying it, and making predictions. One of the first questions to ask therefore concerns the data and the business needs: the team should be organised according to the organisation’s structure and its data strategy. Companies have also realised that hiring a data scientist is not, by itself, the answer to their data value problems. This is partly due to a lack of understanding of the environment surrounding data: a data scientist may understand the data, but not its purposes, environments, or business applications. Let’s take the example of a marketing department working on implementing AI to accelerate its web ROI.


Interview With Prof B Ravindran, Head, Robert Bosch Centre For Data Science & AI

Interpretability of deep learning models is essential for widespread adoption of these techniques in the medical image diagnosis community. Deep learning models have been phenomenally successful at beating the state of the art in common medical image diagnosis tasks such as segmentation and screening, e.g. classification of diabetic retinopathy and chest X-ray scans, among others. While these successes have created huge interest in adopting these techniques in clinical practice, a huge barrier to adoption is the lack of interpretability of these models. Convolutional neural networks with hundreds of layers are the workhorse of medical image diagnosis. While the initial layers are typically edge and shape detectors, it is nearly impossible to explain or interpret the feature maps as one goes deeper into the network. For clinicians to trust the output from these networks, it is essential that a mechanism for explaining the output be present. In addition, black-box techniques make it hard for clinicians to justify the diagnosis and follow-up procedures.
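One family of explanation mechanisms is model-agnostic saliency: occlude parts of the input and measure how the prediction changes. The sketch below illustrates the idea with a toy stand-in for a real network; the predict function and data are invented purely for illustration, not taken from the interview.

# Occlusion sensitivity: slide a mask over the image and record how much
# the model's confidence drops. `predict` is a stand-in for a real CNN.
import numpy as np

def predict(image: np.ndarray) -> float:
    """Hypothetical stand-in: returns the model's confidence in [0, 1]."""
    return float(image[8:16, 8:16].mean())      # toy model: looks at the centre

def occlusion_map(image: np.ndarray, patch: int = 4) -> np.ndarray:
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i+patch, j:j+patch] = 0.0  # occlude one patch
            heat[i // patch, j // patch] = base - predict(masked)
    return heat                                  # high values = important regions

image = np.random.default_rng(1).random((32, 32))
print(occlusion_map(image).round(2))             # centre patches should dominate

For a clinician, a heat map like this, overlaid on the scan, at least shows which regions drove the network's output, which is a starting point for justifying a diagnosis even when the deeper feature maps themselves remain uninterpretable.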



Quote for the day:

"Honor bespeaks worth. Confidence begets trust. Service brings satisfaction. Cooperation proves the quality of leadership." -- James Cash Penney