6 factors to consider while selecting an algorithm library

Processing geometric inputs plays a crucial role in the product development cycle. Ever since the introduction of complex algorithm libraries, the NPD (new product development) landscape has changed drastically, and for the better. Typically, a well-suited library streamlines the work process by executing complicated tasks using a wide array of functions.

An algorithm library basically works on the principle that it is fed specific instructions, which it executes through the functionalities built into it. For example, in the manufacturing industry, a point cloud library specializes in converting millions of point cloud data points into mesh models.

There are particular algorithms to perform numerous perplexing tasks, and platforms that use specific and unique functionalities and programming to get the job done. Manufacturing requirements and end-product objectives lay down the criteria for choosing a particular algorithm library. This article sheds light on 6 key factors to consider while selecting an algorithm library.

Required functionality

Once data has been fed and stored, methods for processing and compressing it become highly relevant. Different algorithm libraries come with their own sets of functionalities. Ideally, functionalities are best when developed by an in-house development team, so that they line up with the design objectives. It is good practice to develop functionalities that address complex operations as well as simple tasks, and to develop functions that might be needed down the line. In the end, one's objective defines which functionality-laden algorithm library will be used.

Data Size and Performance

Huge datasets can be challenging to handle and share between project partners, and processing time grows in proportion to data size. All the investments in hardware and quality connections will be of little use if one is using a poorly performing library. An algorithm library that allows multiple scans to be processed simultaneously should be the primary preference. One should also have a good definition of the performance expected from the library, depending on whether the application runs in real time or in batch mode.

Processing speed

Libraries that automate manual processes often emphasize processing speed, delivering improvements to either the processing or the modeling. This allows for faster innovation and often better products. As witnessed in the case of point clouds, the ability to generate scan trees after a dataset has been processed greatly improves efficiency. A system with a smooth interface that permits fast execution greatly reduces the effort and time taken to handle large datasets.

Make versus Buy

This question arises in the early phases of processing. Let us take the example of point cloud libraries. Some of the big brands producing point cloud processing libraries are Autodesk, Bentley, Trimble, and Faro. However, most of these systems arrive as packages with 3D modelling software, thereby driving up costs. If such is the case, it is advisable to build an in-house point cloud library that suits the necessities. Nowadays, many open source platforms offer point cloud libraries such as PCL to get the job done, which has proven to be quite beneficial.

Commercial Terms

The commercial aspect also plays a vital role while choosing an algorithm library. Whether to opt for a single or recurring payment depends upon the volume and nature of the project.

There are different models to choose from if one decides to license a commercial library:

A: Single payment: a one-time fee with no per-license charges, and an optional AMC (annual maintenance contract)

B: Subscription based: an annual subscription, without per-license fees

C: Hybrid: a certain down payment plus per-license revenue sharing

Whatever option you select, make sure there is a clause in the legal agreement that caps the increase in the charges to a reasonable limit.

Storage, Platforms and Support

Storage has become less of an issue than it was even a decade ago; desktops and laptops with more than a terabyte of capacity are all over the market. Not every algorithm library requires heavy graphics, so investing in a quality graphics card is only important if your preferred library demands heavy graphics usage. That doesn't mean settling for the cheapest hardware and storage systems available: a quality processor with plenty of RAM is a sound choice if the processing task is CPU- and memory-intensive. Another point to look into is the type of platform, or more precisely the interface, that the algorithm library supports. Varied requirements call for varied platforms such as Windows, macOS, and Linux. The usage and licensing should be taken into account before selecting an interface.

Last but not least, inputs from customers are highly significant, and there has to be a robust support system to address any grievance from the customer side. Having trained support staff or a customised automated support system must be given high priority.


Image Processing

Previously, we learned what visual inspection is and how it helps in inspection checks and quality assurance of manufactured products. Vision-based inspection builds on a specific technology known as image processing.

Image processing is a technique to carry out a particular set of actions on an image for obtaining an enhanced image or extracting some valuable information from it.

It is a sort of signal processing where the input is an image, and the output may be an improved image or characteristics/features associated with it. Over the years, image processing has become one of the most rapidly growing technologies within engineering and computer science.

Image processing consists of the following three steps:

  • Importing the image via image capturing tools;
  • Manipulating and analyzing the image;
  • Producing a result where the output can be an altered image or report that is based on image analysis.
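
To make these steps concrete, here is a minimal sketch in Python, assuming the OpenCV (cv2) package and a hypothetical input file named part.png; the processing step here is a simple blur-and-edge-detect pipeline:

    import cv2

    # 1. Import the image via an image capturing/reading tool.
    image = cv2.imread("part.png")

    # 2. Manipulate and analyze the image: grayscale, smooth, find edges.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(smooth, 50, 150)

    # 3. Produce a result: an altered image on disk plus a tiny report.
    cv2.imwrite("part_edges.png", edges)
    print(f"edge pixels found: {(edges > 0).sum()}")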

Image processing includes two types of methods:

Analogue Image Processing: Generally, analogue image processing is used for hard copies like photographs and printouts. Image analysts use various facets of interpretation while using these visual techniques.

Digital image processing: Digital image processing methods help in the manipulation and analysis of digital images. The three general steps that all types of data have to undergo while using digital image processing techniques are -  pre-processing, enhancement, and information extraction.

This article discusses primarily digital image processing techniques and various phases.


Digital Image Processing and different phases

Digital image processing uses digital computers to convert images into digital form and then process them. It is about subjecting numerical depictions of images to a series of operations to obtain a desired result. The primary advantages of digital image processing lie in its versatility, repeatability and the preservation of the original data.

The main techniques of digital image processing are as follows:

  • Image Editing: Changing or altering digital images with graphics software tools.
  • Image Restoration: Processing a corrupt image to recover the clean original and get back the lost information.
  • Independent Component Analysis: Computationally separates a multivariate signal into additive subcomponents.
  • Anisotropic Diffusion: Reduces image noise without removing essential portions of the image.
  • Linear Filtering: Processes time-varying input signals and produces output signals subject to the constraint of linearity.
  • Neural Networks: Computational models used in machine learning for solving various tasks.
  • Pixelation: A method for turning printed images into digitized ones.
  • Principal Components Analysis: A technique used for feature extraction.
  • Partial Differential Equations: A method mainly used for de-noising images.
  • Hidden Markov Models: A technique used for image analysis in two dimensions (2D).
  • Wavelets: Mathematical functions used in image compression.
  • Self-organizing Maps: A technique that classifies images into several classes.

Image recognition technology has grown to hold great potential for wide adoption across industries. Its usage has increased significantly with each passing year, as enterprises have become more time-efficient and productive by incorporating better manufacturing, inspection and quality assurance tools and processes. Big corporations and start-ups such as Tesla, Google, Uber and Adobe Systems use image processing techniques heavily in their day-to-day operations. With advancements in the field of AI (artificial intelligence), this technology will see significant upgrades in the coming years.


Image Processing Algorithms

So far, we have seen that image processing is a technique for carrying out a particular set of actions on an image in order to obtain an enhanced image or extract valuable information from it. The input is an image, and the output may be an improved image or characteristics/features associated with it.

It is essential to know that computer algorithms play the most significant role in digital image processing. Developers have been using and implementing multiple algorithms to solve various tasks, including digital image detection, image analysis, image reconstruction, image restoration, image enhancement, image data compression, spectral image estimation, and image estimation. The algorithms can be straight out of the book or a customized amalgamation of several algorithmic functions.

Image processing algorithms commonly used for complete image capture can be categorized into:

Low-level techniques, such as color enhancement and noise removal;

Medium-level techniques, such as compression and binarization;

and Higher-level techniques, such as segmentation, detection, and recognition, which extract semantic information from the captured data.


Types of Image Processing Algorithms

Some of the conventional image processing algorithms are as follows:

Contrast Enhancement algorithm: Contrast enhancement is further subdivided into -

  • Histogram equalization algorithm: Using the histogram to improve image contrast
  • Adaptive histogram equalization algorithm: Histogram equalization that adapts to local changes in contrast

Connected-component labeling algorithm: It finds and labels disjoint regions of an image.
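
As a concrete illustration of the two histogram-based variants above, here is a minimal sketch assuming OpenCV (cv2) and a hypothetical grayscale image part.png:

    import cv2

    gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

    # Global histogram equalization: spreads intensities over the full range.
    equalized = cv2.equalizeHist(gray)

    # Adaptive variant (CLAHE): equalizes tile by tile, adapting to local contrast.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    locally_equalized = clahe.apply(gray)

    cv2.imwrite("part_eq.png", equalized)
    cv2.imwrite("part_clahe.png", locally_equalized)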

Dithering and half-toning algorithm: Dithering and half-toning include the following (a sketch of Floyd–Steinberg error diffusion follows this list) -

  • Error diffusion algorithm
  • Floyd–Steinberg dithering algorithm
  • Ordered dithering algorithm
  • Riemersma dithering algorithm
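
As promised above, here is a sketch of Floyd–Steinberg error diffusion: each pixel is rounded to black or white, and the rounding error is pushed onto the not-yet-visited neighbours with the classic 7/16, 3/16, 5/16, 1/16 weights. Pure NumPy, written for clarity rather than speed:

    import numpy as np

    def floyd_steinberg(gray):
        """1-bit Floyd-Steinberg dithering of a grayscale image in [0, 255]."""
        img = gray.astype(np.float32).copy()
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = 255.0 if old >= 128 else 0.0
                img[y, x] = new
                err = old - new
                # Diffuse the quantization error to unvisited neighbours.
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h and x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                if y + 1 < h:
                    img[y + 1, x] += err * 5 / 16
                if y + 1 < h and x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
        return img.astype(np.uint8)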

Elser difference-map algorithm: It is a search algorithm used for general constraint satisfaction problems. It was initially used for X-ray diffraction microscopy.

Feature detection algorithm: Feature detection consists of -

  • Marr–Hildreth algorithm: It is an early edge detection algorithm
  • Canny edge detector algorithm: The Canny edge detector is used for detecting a wide range of edges in images (see the sketch after this list).
  • Generalized Hough transform algorithm
  • Hough transform algorithm
  • SIFT (Scale-invariant feature transform) algorithm: SIFT is an algorithm to identify and define local features in images.
  • SURF (Speeded Up Robust Features) algorithm: SURF is a robust local feature detector.
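
For a taste of feature detection in practice, here is a hedged sketch using OpenCV: the Canny detector from the list above, plus SIFT keypoints (exposed as cv2.SIFT_create in recent OpenCV releases); part.png is again a hypothetical input:

    import cv2

    gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

    # Canny: hysteresis thresholds separate weak edges from strong ones.
    edges = cv2.Canny(gray, threshold1=50, threshold2=150)

    # SIFT: scale-invariant keypoints plus descriptors for matching.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    print(len(keypoints), "SIFT keypoints detected")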

Richardson–Lucy deconvolution algorithm: This is an image deblurring algorithm.

Blind deconvolution algorithm: Much like Richardson–Lucy deconvolution, it is an image de-blurring algorithm, used when the point spread function is unknown.

Seam carving algorithm: Seam carving is a content-aware image resizing algorithm.

Segmentation algorithm: This class of algorithms partitions a digital image into two or more regions.

  • GrowCut algorithm: an interactive segmentation algorithm
  • Random walker algorithm
  • Region growing algorithm
  • Watershed transformation algorithm: A class of algorithms based on the watershed analogy

It is worth noting that apart from the algorithms mentioned above, industries also create customized algorithms to address their needs. These can be built from scratch or as combinations of various algorithmic functions. It is safe to say that with the evolution of computer technology, image processing algorithms have provided ample opportunities for researchers and developers to investigate, classify, characterize, and analyze all kinds of images.


Mesh

Those acquainted with mechanical design and reverse engineering can testify to the fact that the road to a new product design involves several steps. In reverse engineering, the entire process can be summarized as scanning, point cloud generation, meshing, computer-aided design, prototyping and final production. This section covers a very crucial part of the process: meshing, or simply put, the mesh.

To put it simply, a mesh is a network made up of cells and points.

Mesh generation is the practice of converting a given set of points into a consistent polygonal model, generating vertices, edges and faces that meet only at shared edges. A mesh can have almost any shape and any size. Each cell of the mesh represents an individual solution which, when combined with the others, results in a solution for the entire mesh.

A mesh is formed of facets that are connected to each other topologically. The topology is created using the following entities:

  • Facet - A triangle connecting three data points
  • Edge - A line connecting two data points
  • Vertex - A data point
Mesh Property

Before we proceed to the types of meshes, it is necessary to understand the various aspects that constitute a mesh, beginning with the concept of a polygonal mesh.

A polygon mesh is a collection of vertices, edges and faces that defines the shape of a polyhedral object in 3D graphics and solid modeling. The faces usually consist of triangles, quadrilaterals or other simple polygons as that simplifies rendering. It may also be composed of more general concave polygons or polygons with holes.

Objects created with polygon meshes must store different types of elements. These include:

  • Vertex: A position (usually in 3D space) along with other information such as color, normal vector and texture coordinates
  • Edge: A connection between two vertices
  • Face: A closed set of edges, in which a triangle face has three edges, and a quad face has four edges
  • Surfaces: Often called smoothing groups; they are useful, but not required, to group smooth regions

A polygon mesh may be represented in a variety of ways, using different methods to store the vertex, edge and face data. These include the following (a minimal face-vertex sketch follows the list):

  • Face-vertex meshes
  • Winged edge meshes
  • Corner tables
  • Vertex-vertex meshes
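
To make the first of these concrete, here is a minimal face-vertex sketch in Python: vertices as 3D positions, faces as triples of vertex indices, with edges recovered from the faces. It is an illustration, not any particular library's format:

    import numpy as np

    # Four vertices and four triangular faces: a tetrahedron.
    vertices = np.array([
        [0.0, 0.0, 0.0],
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
    ])
    faces = np.array([
        [0, 1, 2],
        [0, 1, 3],
        [0, 2, 3],
        [1, 2, 3],
    ])

    # Each face (a, b, c) contributes three edges; a set removes duplicates.
    edges = {tuple(sorted(e))
             for a, b, c in faces
             for e in ((a, b), (b, c), (a, c))}

    # Euler's formula for a closed genus-0 mesh: V - E + F = 2.
    print(len(vertices) - len(edges) + len(faces))  # 4 - 6 + 4 = 2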
Types of meshes

Meshes are commonly classified into two divisions, surface mesh and solid mesh. Let us go through each one in turn.

Surface Mesh: A surface mesh is a representation of each individual surface constituting a volume mesh. It consists of faces (triangles) and vertices. Depending on the pre-processing software package, feature curves may be included as well.

Generally, a surface mesh should not have free edges, and no edge should be shared by more than two triangles.

The surface should ideally contain the following qualities of triangle faces:

  • Equilateral sized triangles
  • No sharp angles/surface folds etc. within the triangle proximity sphere
  • Gradual variation in triangle size from one to the next

The surface mesh generation process should be considered carefully. It has a direct influence on the quality of the resulting volume mesh and the effort it takes to get to this step.

surface mesh

Solid Mesh: Solid mesh, also known as volume mesh, is a polygonal representation of the interior volume of an object. There are three different types of meshing models that can be used to generate a volume mesh from a well prepared surface mesh.

The three types of meshing models are as follows:

  • Tetrahedral - tetrahedral cell shape based core mesh
  • Polyhedral - polyhedral cell shape based core mesh
  • Trimmed - trimmed hexahedral cell shape based core mesh

Once the volume mesh has been built, it can be checked for errors and exported to other packages if desired.

solid mesh

Mesh type as per Grid structure

A grid is a cuboid that covers the entire mesh under consideration. A grid mainly enables fast neighbour lookup around a seed point.

mesh grid

Meshes can be classified into two divisions from the grid perspective, namely Structured and Unstructured mesh. Let us have a look at each of these types.

Structured Mesh: Structured meshes exhibit a well-known pattern in which the cells are arranged. As the cells are in a particular order, the topology of such a mesh is regular. Such meshes enable easy identification of neighbouring cells and points because of their formation and structure. Structured meshes are applied over rectangular, elliptical and spherical coordinate systems, thus forming a regular grid. Structured meshes are often used in CFD (computational fluid dynamics).

structured mesh

Unstructured Mesh: Unstructured meshes, as the name suggests, are more general and can conform to almost any shape of geometry. Unlike structured meshes, the connectivity pattern is not fixed, so unstructured meshes do not follow a uniform pattern; in exchange, they are more flexible. Unstructured meshes are generally used in complex mechanical engineering projects.

Unstructured Mesh


Mesh Quality

The quality of a mesh plays a significant role in the accuracy and stability of the numerical computation. Regardless of the type of mesh used in your domain, checking its quality is a must. 'Good meshes' are the ones that produce results with an acceptable level of accuracy, assuming that all other inputs to the model are accurate. While evaluating whether the quality of the mesh is sufficient for the problem being modeled, it is important to consider attributes such as mesh element distribution, cell shape, smoothness, and flow-field dependency.

Element Distribution

It is known that meshes are made of elements (vertices, edges and faces). The extent, to which the noticeable features such as shear layers, separated regions, shock waves, boundary layers, and mixing zones are resolved, relies on the density and distribution of mesh elements. In certain cases, critical regions with poor resolution can dramatically affect results. For example, the prediction of separation due to an adverse pressure gradient depends heavily on the resolution of the boundary layer upstream of the point of separation.

Cell Quality

The quality of a cell has a crucial impact on the accuracy of the entire mesh. Cell quality is assessed by three measures: orthogonal quality, aspect ratio and skewness.

Orthogonal Quality: An important indicator of mesh quality is an entity referred to as the orthogonal quality. The worst cells will have an orthogonal quality close to 0 and the best cells will have an orthogonal quality closer to 1.

Aspect Ratio: Aspect ratio is an important indicator of mesh quality. It is a measure of stretching of the cell. It is computed as the ratio of the maximum value to the minimum value of any of the following distances: the normal distances between the cell centroid and face centroids and the distances between the cell centroid and nodes.

Skewness: Skewness can be defined as the difference between the shape of the cell and the shape of an equilateral cell of equivalent volume. Highly skewed cells can decrease accuracy and destabilize the solution.
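
As an illustration, here is a small NumPy sketch computing two of these measures for a triangular cell, using common conventions (aspect ratio as longest/shortest edge, and equiangular skewness measured against the 60-degree angles of an equilateral triangle) rather than any one solver's exact definition:

    import numpy as np

    def triangle_quality(p0, p1, p2):
        """Aspect ratio and equiangular skewness of a triangle."""
        pts = [np.asarray(p, dtype=float) for p in (p0, p1, p2)]
        a = np.linalg.norm(pts[1] - pts[0])
        b = np.linalg.norm(pts[2] - pts[1])
        c = np.linalg.norm(pts[0] - pts[2])

        aspect = max(a, b, c) / min(a, b, c)

        # Interior angles from the law of cosines.
        angles = np.degrees([
            np.arccos((b**2 + c**2 - a**2) / (2 * b * c)),
            np.arccos((a**2 + c**2 - b**2) / (2 * a * c)),
            np.arccos((a**2 + b**2 - c**2) / (2 * a * b)),
        ])
        skew = max((angles.max() - 60) / 120, (60 - angles.min()) / 60)
        return aspect, skew

    # A near-equilateral triangle scores ~ (1.0, 0.0); flatter cells score worse.
    print(triangle_quality([0, 0], [1, 0], [0.5, 0.866]))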

Smoothness

Smoothness relates to truncation error, which is the difference between the partial derivatives in the governing equations and their discrete approximations. Rapid changes in cell volume between adjacent cells result in larger truncation errors. Smoothness can be improved by refining the mesh based on the change in cell volume or the gradient of cell volume.

Flow-Field Dependency

The effects of resolution, smoothness, and cell shape on the accuracy and stability of the solution process depend on the flow field being simulated. For example, skewed cells can be acceptable in benign flow regions, but they can be very damaging in regions with strong flow gradients.

Correct Mesh Size

Mesh size stands out as one of the most common problems in an analysis. Bigger elements yield poorer results; on the other hand, smaller elements make computing so expensive that it takes a long time to get any result. One might never really know where exactly on this scale the chosen mesh size sits.

It is important to repeat the chosen analysis with different mesh sizes. As a smaller mesh means a significant amount of computing time, it is important to strike a balance between computing time and accuracy. Too coarse a mesh leads to erroneous results. In places where big deformations, stresses or instabilities take place, reducing element sizes locally allows for greatly increased accuracy without great expense in computing time.


Meshing Algorithms

In the previous section, we learned what a mesh is and the various aspects by which a mesh can be classified. Mesh generation requires expertise in the areas of meshing algorithms, geometric design, computational geometry, computational physics, numerical analysis, scientific visualization and software engineering to create a mesh tool.

Over the years, mesh generation technology has evolved shoulder to shoulder with increasing hardware capability. Even with fully automatic mesh generators, there are many cases where the solution time is less than the meshing time. Meshing can be used for a wide array of applications; however, the principal application of interest is the finite element method. Surface domains are divided into triangular or quadrilateral elements, while volume domains are divided mainly into tetrahedral or hexahedral elements. A meshing algorithm ideally defines the shape and distribution of the elements.

Mesh generation algorithms are a key step of the finite element method for numerical computation. A given domain is to be partitioned into simpler 'elements'. There should be few elements, but some portions of the domain may need small elements so that the computation is more accurate there. All elements should be 'well shaped'. Let us take a walkthrough of different meshing algorithms for the two common element families, namely quadrilateral/hexahedral meshes and triangular/tetrahedral meshes.

Algorithm methods for Quadrilateral or Hexahedral Mesh

Grid-Based Method

The grid based method involves the following steps:

  • A user-defined grid is fitted on the 2D or 3D object. It generates quad or hex elements in the interior of the object.
  • Patterns are defined for the boundary elements, which are formed where the grid intersects the boundary.
  • This results in the generation of a quadrilateral mesh model.

Mesh Grid based method

 

Medial Axis Method

Medial axis method involves an initial decomposition of the volumes. The method involves a few steps, as given below:

  • Consider a 2D object with a hole.
  • A maximal circle is rolled through the model, and the centre of the circle traces the medial object.
  • The medial object is used as a tool for automatically decomposing the model into simple meshable regions.
  • A series of templates for the regions is formed by the medial axis method to fill the area with quad elements.

Mesh Medial axis method

 

Plastering method

Plastering is the process in which elements are placed starting with the boundaries and advancing towards the centre of the volume. The steps of this method are as follows:

  • A 3D object is taken.
  • One hexahedral element is placed at the boundary.
  • Individual hexahedral elements are projected towards the interior of the volume to form the hexahedral mesh, row by row and element by element.
  • The process is repeated until mesh generation is completed.

Mesh Plastering method

 

Whisker Weaving Method

Whisker weaving is based on the concept of the spatial twist continuum (STC). The STC is the dual of the hexahedral mesh, represented by an arrangement of intersecting surfaces, which bisect hexahedral elements in each direction. The whisker weaving algorithm can be explained as in the following steps:

  • The first step is to construct the STC or dual of the hex mesh.
  • With a complete STC, the hex elements can then be fitted into the volume using the STC as a guide. The loops can be easily determined from an initial quad mesh of the surface.
  • Hexes are then formed inside the volume, once a valid topological representation of the twist planes is achieved. One hex is formed wherever three twist planes converge.

Mesh Whisker weaving method

 

Paving Method

The paving method has the following steps to generate a quadrilateral mesh:

  • Initially a 2D object is taken.
  • Nodes are inserted on the boundary, and the boundary nodes are treated as a loop.
  • A quadrilateral element is inserted and a row of elements is formed.
  • The row of elements is placed around the boundary nodes.
  • The same procedure is then adopted for the next rows.
  • Finally, the quad mesh model is formed.

Mesh Paving method

 

Mapping Mesh Method

The Mapped method for quad mesh generation involves the following steps:

  • A 2D object is taken.
  • The 2D object is split into two parts.
  • Each part is a simple rectangular or square 2D object.
  • Each simple-shaped object is unit meshed.
  • The unit-meshed simple shapes are mapped back to their original form and then joined to form the actual object.

Mapping mesh method

 

Algorithm methods for Triangular and Tetrahedral Mesh

Quadtree Mesh Method

With the quadtree mesh method, squares containing the geometric model are recursively subdivided until the desired resolution is reached. The steps for the two-dimensional quadtree decomposition of a model are as follows:

  • A 2D object is taken.
  • The 2D object is divided into rectangular parts.
  • A detail tree of the divided object is built.
  • The object is eventually converted into triangle mesh.

 Quadtree mesh method

 

Delaunay Triangulation Method

A Delaunay triangulation for a set P of discrete points in the plane is a triangulation DT such that no point in P is inside the circumcircle of any triangle in DT. The steps for constructing a Delaunay triangulation are as follows:

  • The first step is to consider some coordinate points or nodes in space.
  • Every candidate set of three points is tested against the validity condition; the valid triangles become the triangular elements of the mesh.
  • Finally a triangular mesh model is obtained.

A Delaunay triangulation maximizes the minimum angle over all triangles, and thus tends to avoid skinny triangles.
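
In practice one rarely codes this from scratch; SciPy, for instance, exposes a ready-made implementation. A minimal sketch:

    import numpy as np
    from scipy.spatial import Delaunay

    # Thirty random nodes in the plane; Delaunay() connects them into
    # triangles whose circumcircles contain no other input point.
    points = np.random.rand(30, 2)
    tri = Delaunay(points)

    print(tri.simplices.shape)  # (n_triangles, 3) point-index triples
    print(tri.simplices[:5])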

Mesh Delaunay Triangulation method

 

Advancing Front Method

Another very popular family of triangular and tetrahedral mesh generation algorithms is the advancing front method, or moving front method. The mesh generation process is explained in the following steps:

  • A 2D object with a hole is taken.
  • Inner and outer boundary nodes are inserted. The node spacing is determined by the user.
  • Edges are inserted to connect the nodes.
  • To start the meshing process, an edge AB is selected and a perpendicular is drawn from the midpoint of AB to a point C (placed at the user-defined node spacing) to make a triangular element.
  • After one element is generated, another edge is selected as AB and a point C is constructed; but if another existing node D lies within the defined radius, the element ABC is cancelled and an element ABD is formed instead.
  • This process is repeated until the mesh is generated.

Mesh Advancing Front method

 

Spatial Decomposition Method

The steps for spatial decomposition method are as follows:

  • Initially a 2D object is taken.
  • The 2D object is divided into progressively smaller parts until a refined triangular mesh is obtained.

Mesh Spatial Decomposition method

 

Sphere Packing Method

The sphere packing method follows the given steps:

  • Before constructing a mesh, the domain is filled with circles.
  • The circles are packed closely together, so that the gaps between them are surrounded by three or four tangent circles.
  • These circles are then used as a framework to construct the mesh, by placing mesh vertices at circle centres, at points of tangency, and within each gap, using the generated points. Eventually, the triangular mesh is generated.

Mesh Sphere Packing method


Source

Singh, Lokesh (2015). A Review on Mesh Generation Algorithms. Retrieved from http://www.ijrame.com


Optimization Problems

In mathematics and computer science, optimization problems refer to the procedure of finding the most appropriate solution out of all feasible solutions.

The optimization problem can be defined as a computational situation where the objective is to find the best of all possible solutions.


Types of Optimization Technique

An essential first step in optimization is to categorize the optimization model, since the algorithms used for solving optimization problems are tailored to the nature of the problem. Let us walk through the various types of optimization problems:

Continuous Optimization versus Discrete Optimization
Models with discrete variables are discrete optimization problems, while models with continuous variables are continuous optimization problems. Continuous optimization problems tend to be easier to solve than discrete ones. In a discrete optimization problem, the aim is to find an object such as an integer, permutation, or graph from a countable set. However, with improvements in algorithms coupled with advancements in computing technology, there has been an increase in the size and complexity of discrete optimization problems that can be solved efficiently. Note that continuous optimization algorithms remain important in discrete optimization, because many discrete optimization algorithms generate a series of continuous sub-problems.

Unconstrained Optimization versus Constrained Optimization
An essential distinction between optimization problems is between problems in which there are constraints on the variables and problems in which there are none.

Unconstrained optimization problems arise directly in many practical applications and also in the reformulation of constrained optimization problems. Constrained optimization problems arise from applications where there are explicit constraints on the variables. They are further subdivided according to the nature of the constraints, such as linear, nonlinear or convex, and the smoothness of the functions, such as differentiable or non-differentiable.

None, One, or Many Objectives
Although most optimization problems have a single objective function, there are cases where optimization problems have either no objective function or multiple objective functions. Multi-objective optimization problems arise in fields such as engineering, economics, and logistics. Often, problems with multiple objectives are reformulated as single-objective problems.

Deterministic Optimization versus Stochastic Optimization
Deterministic optimization assumes that the data for the given problem are known accurately. Sometimes, however, the data cannot be known precisely, for a variety of reasons: a simple measurement error can be one; another is that some data describe information about the future and hence cannot be known with certainty. When the uncertainty is incorporated into the model, this optimization under uncertainty is called stochastic optimization.

Two widely used classes of optimization problems are:

Linear Programming: In a linear programming (LP) problem, the objective and all of the constraints are linear functions of the decision variables.

As all linear functions are convex, linear programming problems are intrinsically easier to solve than general nonlinear problems.
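
As a small worked example, here is an LP solved with SciPy's linprog (which minimizes, so a maximization objective is negated); the numbers are made up for illustration:

    from scipy.optimize import linprog

    # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
    result = linprog(
        c=[-3, -2],                 # negated: linprog minimizes
        A_ub=[[1, 1], [1, 3]],
        b_ub=[4, 6],
        bounds=[(0, None), (0, None)],
    )
    print(result.x, -result.fun)    # optimum x = 4, y = 0, objective = 12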

Quadratic Programming: In a quadratic programming (QP) problem, the objective is a quadratic function of the decision variables, and the constraints are all linear functions of the variables.

A widely used quadratic programming problem is the Markowitz mean-variance portfolio optimization problem, where the objective is the portfolio variance and the linear constraints dictate a lower bound for portfolio return.
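
For a flavour of how such a QP can be solved, here is a sketch of a tiny equality-constrained Markowitz-style problem via its KKT linear system; the covariance matrix and expected returns are invented illustrative numbers:

    import numpy as np

    # Minimize portfolio variance w' S w subject to sum(w) = 1 and mu' w = r.
    S = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.08, 0.01],
                  [0.04, 0.01, 0.12]])   # covariance (made up)
    mu = np.array([0.06, 0.05, 0.09])    # expected returns (made up)
    r_target = 0.07

    A = np.vstack([np.ones(3), mu])      # equality-constraint matrix
    b = np.array([1.0, r_target])

    # KKT conditions: 2 S w + A' lam = 0 and A w = b, solved in one go.
    K = np.block([[2 * S, A.T],
                  [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(3), b])
    w = np.linalg.solve(K, rhs)[:3]
    print("weights:", w, "variance:", w @ S @ w)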

Linear and Quadratic programming

Optimization is something we all abide by; it is a way of life. We all want to make the most of our available time and be productive. Optimization finds its use everywhere, from managing time to solving supply chain problems.

Previously, we learned that optimization refers to the process of finding the best possible solution out of all feasible solutions. Optimization can be further divided into two categories: linear programming and quadratic programming. Let us take a walkthrough.

Linear Programming

Linear programming is a simple technique for finding the best outcome, or more precisely optimum points, from complex relationships depicted through linear functions. In simple words, the real relationships could be much more complex, but they can be simplified into linear relationships.

Linear programming is widely used in optimization for several reasons:

  • In operations research, complex real-life problems can be expressed as linear programming problems.
  • Many algorithms for certain types of optimization problems operate by solving linear programming problems as sub-problems.
  • Many key concepts of optimization theory, such as duality, decomposition, convexity, and convexity generalizations, have been inspired by and derived from ideas in linear programming.
  • The early formulation of microeconomics made use of linear programming, and it is still used in planning, production, transportation, technology and other departments.

Quadratic Programming

Quadratic programming is a method for solving a special class of optimization problems, in which a quadratic objective function is optimized (minimized or maximized) subject to one or more linear constraints. Quadratic programming is sometimes referred to as a special case of nonlinear programming.

The objective function in QP may carry bilinear or up-to-second-order polynomial terms. The constraints are usually linear and can be both equalities and inequalities.

Quadratic programming is widely used in optimization, with applications including:

  • Image and signal processing
  • Optimization of financial portfolios
  • Performing the least-squares method of regression
  • Controlling scheduling in chemical plants
  • Solving more complex non-linear programming problems
  • Usage in operations research and statistical work 

Path to Product Development

If you are an engineering professional, you are most likely aware of how a physical product comes to life. From the early days of sketching and blueprints, manufacturing of a commodity has come a long way. The modern methodology of creating a product has not only changed drastically but has become far more efficient and precise in its approach. Today's engineer lives and thrives in the world of 3-dimensional models. Whatever masterpiece a designer has in mind, there are tools and systems to give it life. And it is not limited to turning a new idea into a product; it has also made reverse engineering more widely practised than ever.

So what are the factors that have revolutionized this craft?

It is safe to say that with the invention of new tools, techniques and computers, the road to new product development has become smoother, more accurate and more flexible. Although a professional can get deep into the subject matter, this article gives a brief overview of product development from a technical perspective.

The steps to a new product can be summarized in the following sequence.

 

path to product development

To put it in words, here is how the entire sequence goes:

  • Scanning: Whether you have an entirely new idea on your mind, or you want to base your idea on an already existing product; you need a reference. Your reference can be either technical manuals from the manufacturer or the physical product itself. The first step is to scan the product using 3D scanners. 3D scanning technology comes in many shapes and forms. Scanners capture and store the 3D information of the product. The scanned information gets stored in the form of closely spaced data points known as Point Cloud.
  • Point Cloud: A point cloud is a collection of data points defined by a given coordinates system. In a 3D coordinates system, for example, a point cloud may define the shape of some real or created physical system.
  • Mesh: Point clouds are used to create 3D meshes. A mesh is a network made up of cells and points. Mesh generation connects the points of the cloud through vertices, edges and faces that meet at shared edges. There is specific software for carrying out the meshing function.
  • 3D Model: Once the meshed part is generated, it is taken through the required software applications into Computer Aided Design (CAD) tools, where it is transformed into a proper 3D CAD model. The 3D model is the stage where all sorts of operations, such as sewing and stitching, are applied to create a prototype.
  • Testing: A prototype goes through numerous tests in this phase, to check for limitations and possible calibrations if necessary. This is done to determine the optimum stage where the prototype can be turned to a product.
  • Product: This is where the entire process comes to an end. Once a prototype is evaluated and finalized, it is sent for production in order to introduce it to the market.

This introductory part gives you a summary of product development and the related technical terms. In the next chapters, we will dive deep into all the mentioned stages, one by one.


Point Cloud Operations

No output is ever perfect, no matter how much the technology has evolved. Even though point cloud generation has eased the manufacturing process, it comes with its own anomalies. Generally, point cloud data is accompanied by noise and outliers.

Noise, or noisy data, means the data is contaminated by unwanted information; the unwanted information contributes impurity while the underlying information still dominates. Noisy point cloud data can be filtered, and the noise discarded entirely, to produce a much more refined result.

If we carefully examine the image below, it illustrates point cloud data with noise: the surface area is filled with extra features which can be eliminated.

 

Point Cloud before noise reduction

 

The image below illustrates the outcome after carrying out the noise reduction process: a much smoother dataset without unwanted elements. There are many algorithms and processes for noise reduction.

 

Point Cloud After noise reduction

 

An outlier, on the contrary, is data that is not totally meaningless and might turn out to be of interest. An outlier is a data value that differs considerably from the main set of data. Unlike noise, outliers are not always removed outright; they are sometimes put under analysis first.
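
A simple way to flag such outliers computationally is a statistical filter: drop points whose mean distance to their nearest neighbours is far above the cloud-wide average. A small NumPy sketch (brute-force distances, so suitable only for small clouds):

    import numpy as np

    def remove_outliers(cloud, k=8, std_ratio=2.0):
        # All pairwise distances; a KD-tree would be used for large clouds.
        d = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=-1)
        # Mean distance to the k nearest neighbours, skipping self (distance 0).
        knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
        keep = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
        return cloud[keep]

    cloud = np.vstack([np.random.rand(500, 3), [[5.0, 5.0, 5.0]]])
    print(remove_outliers(cloud).shape)  # the one distant point is dropped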

The images below clearly portray what outliers are and how the point cloud data looks once the outliers are removed.

 

Point Cloud With outliers

 

Point Cloud Without outliers

 

Point Cloud Decimation

We have learned how point cloud data comes with noise and outliers, and the methods to reduce them and make the data more usable for meshing. Point cloud data undergoes several operations to treat these anomalies. Two of the commonly used operations are point cloud decimation and point cloud registration.

Point cloud data consists of millions of small points, sometimes even more than what is necessary. Decimation is the process of discarding points from the data to improve performance and reduce disk usage. A decimate-point-cloud command reduces the size of point clouds.

The following example shows how a point cloud underwent decimation to reduce the excess points.

Point Cloud Before decimation

 

Point Cloud After decimation
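
The simplest decimation scheme just keeps every Nth point; real packages also offer smarter options such as voxel-grid downsampling. A minimal sketch on a synthetic cloud:

    import numpy as np

    cloud = np.random.rand(1_000_000, 3)  # stand-in for raw scanner output

    step = 10                             # keep one point in ten
    decimated = cloud[::step]
    print(cloud.shape, "->", decimated.shape)  # (1000000, 3) -> (100000, 3)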

 

Point Cloud Registration

Scanning a commodity is not a one-step process. A lot of the time, scanning needs to be done separately from different angles to get complete coverage. Each acquired data view is called a dataset. Every dataset obtained from a different view needs to be aligned with the others into a single point cloud model so that subsequent processing steps can be applied. This process of aligning various 3D point cloud data views into a complete point cloud model is known as registration. The purpose is to find the relative positions and orientations of the separately acquired views such that the intersecting regions between them overlap perfectly.
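
A common algorithm for this alignment is ICP (iterative closest point). Here is a hedged sketch using the Open3D package, with view1.ply and view2.ply as hypothetical overlapping scans:

    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("view1.ply")
    target = o3d.io.read_point_cloud("view2.ply")

    # Iteratively match closest points and refine the rigid transformation.
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=0.05,  # in the scans' length units
        init=np.eye(4),                    # initial alignment guess
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    source.transform(result.transformation)  # bring source into target's frame
    print(result.fitness, result.inlier_rmse)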

Take a look at the example given below. The car door data sets have been merged to get a complete model.

 

Point Cloud before registration

 

Point Cloud After registration


Point Clouds

Whether working on a renovation project or building an information model of an as-built situation, the amount of time and energy spent on analysis of the object or project at hand can be quite debilitating. Technical literature over the years has come up with several methods for a precise approach, but inarguably the most prominent is the application of point clouds.

3D scanners gather point measurements from real-world objects or photos for a point cloud that can be translated to a 3D mesh or CAD model.

But what is a Point Cloud?

A common definition of point clouds would be — A point cloud is a collection of data points defined by a given coordinates system. In a 3D coordinates system, for example, a point cloud may define the shape of some real or created physical system.

Point clouds are used to create 3D meshes and other models used in 3D modeling for various fields including medical imaging, architecture, 3D printing, manufacturing, 3D gaming and various virtual reality (VR) applications. A point is identified by three coordinates that, taken together, correlate to a precise point in space relative to a point of origin.
Point Cloud

There are numerous ways of scanning an object or an area with the help of laser scanners, and they vary based on project requirements. However, to give a generic overview of the point cloud generation process, let us go through the following steps:

  1. The generation of a point cloud, and thus the visualization of the data points, is an essential step in the creation of a 3D scan, and 3D laser scanners are the tools for the task. While taking a scan, the laser scanner records a huge number of data points returned from the surfaces in the area being scanned.
  2. Import the point cloud that the scanner creates into point cloud modeling software. The software enables visualizing and modeling the point cloud, transforming it into a pixelated, digital version of the project.
  3. Export the point cloud from the software and import it into the CAD/BIM system, where the data points can be converted to 3D objects.
Different 3D point cloud file formats

Scanning a space or an object and bringing it into designated software lets us further manipulate the scans and stitch them together, after which the result can be exported and converted into a 3D model. There are numerous file formats for 3D modeling. Different scanners yield raw data in different formats; one needs different processing software for such files, and each piece of software has its own exporting capabilities. Most software systems are designed to receive a large number of file formats and have flexible export options. This section will walk you through some known and commonly used file formats. Securing the data in these common formats enables the use of different software for processing without having to approach a third-party converter.

Common point cloud file formats

OBJ: A simple data format that only represents 3D geometry, color and texture; it has been adopted by a wide range of 3D graphics applications. It is commonly stored in ASCII (American Standard Code for Information Interchange).

PLY: PLY stands for the polygon file format, built to store 3D data. It uses lists of nominally flat polygons to represent objects, with the aim of storing a greater number of physical elements. This makes the format capable of representing transparency, color, texture, coordinates and data-confidence values. It is found in ASCII and binary versions.
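
For reference, here is what a minimal ASCII PLY file for a three-point cloud looks like, written by hand for illustration:

    ply
    format ascii 1.0
    element vertex 3
    property float x
    property float y
    property float z
    end_header
    0.0 0.0 0.0
    1.0 0.0 0.0
    0.0 1.0 0.0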

PTS, PTX & XYZ: These three formats are quite common and are compatible with most BIM software. They convey data in lines of text and can be easily converted and manipulated.

PCG, RCS & RCP: These three formats were developed by Autodesk to specifically meet the demands of their software suite. RCS and RCP are relatively newer.

E57: E57 is a compact and widely used vendor-neutral file format and it can also be used to store images and data produced by laser scanners and other 3D imaging systems.

Challenges with point cloud data

The laser scanning procedure has catapulted product design technology to new heights. 3D data capture has come a long way, and we can see where it is headed. As more and more professionals and end users adopt new devices, the scanner market is growing at a quick pace. But along with a positive market change, handling and controlling the available data becomes a key issue.

Six key challenges professionals working with point clouds face are:

  • Data Format: New devices in the market yield data in new formats. Often, one needs to bring together data in different formats from different devices into a compatible software tool. This presents a not-so-easy situation.
  • Data Size: With the advent of new devices, scanning has become cheaper with greater outputs. It is possible to scan huge assets in a single scan. This results in the creation of tens of thousands of data points, and huge sets of points can be challenging to handle and share between project partners.
  • Inter-operability: Integrating new technologies with existing software can be quite arduous, although with careful investment of time and money the goal can be achieved.
  • Access: All the professionals involved in the entire lifecycle of a product can benefit from having access to point cloud data, but multiple datasets in multiple formats usually make this more of a hassle.
  • Ownership: Who owns point cloud data? In the past, the EPCs and contractors who captured the data became custodians of the information.
  • Rendering: Different formats can result in rendering problems for point clouds.