Wednesday, January 23, 2013

The Geometry Kernel and What it Means to Product Development

There certainly has been much talk about the CAD geometry kernel in the last few years. Much of this talk comes from the rumors around Dassault Systèmes changing the SolidWorks kernel. I think it is clear now that Dassault has no plans to change the SolidWorks kernel, but rather is developing a new product on the Dassault kernel.

So what's the big deal about the geometry kernel anyway? Why do we even care about what kernel is under the covers of our favorite CAD tool? Should we care about it? Do you know how the geometry kernel can impact your ability to be effective in the design of your products? Strangely, the answer to these questions depends on many factors.

I know I am going to oversimplify this, but here goes anyway. In its purest form the CAD kernel is a geometry engine. It takes instructions, processes the instructions, and delivers results. The results are typically in the form of geometry. Every geometry kernel on the market can be unique in many different ways.

  • The instructions or "functions" that the kernel will accept are very specific to the kernel. Also, each function has a specific set of operands or parameters that go with it. These, of course, have to be presented to the kernel in the appropriate format. There are no standards for kernel functions/instructions.
  • Kernels can also define geometry in many different forms. Some kernels understand analytic geometry, some understand B-Splines, some understand NURBS, and so on. Some might understand all of the previous and know how to merge the different definitions. There are no standards as to how kernels mix and match these definitions.
  • Geometry kernels can also work at different geometry resolutions (accuracy). Some kernels are more stable at very low resolution, while others work more effectively at a higher resolution. Most kernels provide adjustable geometry resolution, and the CAD system typically controls this, either through a setting or automatically. There is no standard geometry resolution for CAD geometry.
  • Geometry kernels can also calculate geometric results differently. Perhaps the easiest way to see this is with the corner "Round" or "Blend" function. Every kernel out there will calculate the vertex region differently - and it is very visible. There are no standards as to how a kernel calculates this type of geometry. Here are some examples from some of the more popular CAD tools on the market. Pay close attention to the differences in topology in these examples. Also note in these examples how the planar faces are affected differently. I can usually tell which CAD system was used to create a 3D model just by looking at the rounded corners.

[Figure: corner blend examples from four popular CAD tools - same geometry with the same radius in all 4 cases; all blends done in one function/feature]
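To make the "no standard functions" point concrete, here is a toy sketch in Python. The class and function names are entirely made up - they are not the API of Parasolid, ACIS, CGM, or any real kernel - but they show how two kernels might expose the "same" corner-rounding operation through incompatible interfaces:

```python
# Two hypothetical kernel front-ends (invented names, not real kernel APIs)
# that both produce a rounded corner, but accept the request differently.

class KernelA:
    """Accepts a blend as an edge list plus a radius."""
    def blend_edges(self, body, edge_ids, radius):
        # Returns a string standing in for the resulting geometry.
        return f"A:blend({sorted(edge_ids)}, r={radius})"

class KernelB:
    """Accepts a round per face pair, sized by diameter."""
    def round_face_pair(self, body, face1, face2, diameter):
        return f"B:round(({face1},{face2}), d={diameter})"

# The CAD system records whatever call it actually made.  A history tree
# holding ("blend_edges", {...}) entries means nothing to KernelB: the
# function name, the parameter names, and even the sizing convention differ.
a = KernelA()
b = KernelB()
print(a.blend_edges("block", [3, 1, 2], radius=5.0))
print(b.round_face_pair("block", 7, 8, diameter=10.0))
```

Same intent, two incompatible call shapes - which is exactly why recorded kernel instructions do not transfer between kernels.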

Those are some of the big differences. There are many more, especially in the area of freeform surface definition.

So, again, what does all this kernel stuff mean to the product development process?

When we are working with CAD we are typically creating two layers of information. The TOP layer is the feature definition, including any import or non-ordered geometry, sketches, parameters, and other modeling functions. This is the history tree, or what some call the "design intent". The BOTTOM layer is the resulting geometry. (Of course, with direct modeling all you have is the bottom layer.) When moving CAD data from one kernel to another, both layers need to be considered.

Let’s consider the TOP layer first; it is actually the more complex of the two. (And for you direct modeling people, you don't need to be concerned with this layer. You can skip to the BOTTOM layer covered below.) A parametric history-based CAD system is basically recording every function it sends to the kernel into the history tree. These kernel functions and their parameters are bundled into the "parametric feature" definition. For example: a sketch with an extrusion distance creates a primitive. Constraints control the size and position of the new primitive. Then a Boolean function is added to specify whether the primitive will be added to or removed from the parent geometry. The Boolean function, with the primitive and related parameters, is passed to the kernel, and the resulting geometry is processed and revealed. This kernel function is processed every time this "feature" is regenerated or reprocessed. The kernel function, along with its required parameters, is very specific to the kernel, as discussed above. It is highly likely that another kernel will not understand this very specific function, and even if it did, the geometrical results could be very different.
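As a rough illustration of that record-and-replay loop, here is a minimal sketch. The kernel, its functions, and the numbers are all hypothetical - no real CAD system works at this level of simplicity - but the mechanism is the same: each feature stores a kernel function name plus parameters, and regeneration replays the tree in order:

```python
# A toy history-based modeler: each "feature" stores the kernel function
# name and its parameters; regeneration replays the tree in order.
# All names and values are illustrative, not any real kernel's API.

class ToyKernel:
    def __init__(self):
        self.volume = 0.0
    def extrude(self, area, distance, boolean):
        # An extrusion either adds or removes material (the Boolean step).
        delta = area * distance
        self.volume += delta if boolean == "add" else -delta

history_tree = [
    ("extrude", {"area": 10.0, "distance": 4.0, "boolean": "add"}),    # base block
    ("extrude", {"area": 2.0, "distance": 4.0, "boolean": "remove"}),  # pocket
]

def regenerate(kernel, tree):
    # This replay happens on every rebuild.  A kernel that does not
    # recognize a recorded function name cannot regenerate the part.
    for func, params in tree:
        getattr(kernel, func)(**params)
    return kernel.volume

print(regenerate(ToyKernel(), history_tree))  # 40.0 - 8.0 = 32.0
```

The history tree is, in effect, a small program written against one specific kernel's instruction set.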

Moving this TOP layer from one kernel to another kernel is very much like taking a FORTRAN program and trying to compile it with a C compiler. It won't work. The functions and related parameters are just not compatible. As such, the functions recorded in the history tree have to be translated to work on a different kernel. The CAD industry has made several attempts at translators for this TOP layer from one kernel to another - but we have had very limited success. By the way, this is also one of the reasons for slow progress on geometry kernels - we are too locked into this TOP layer. If we make too many changes to the kernel we may break upward compatibility with previous versions' "history trees", i.e. "kernel functions". There have been many examples of this problem over the history of CAD.

Moving the TOP layer from kernel to kernel may pose the highest risk to product development. This translation has to work perfectly, otherwise history/design intent can be lost. By the way, if you do get a complete and robust translation of this TOP layer from one kernel format to another, you don't need to be concerned with the BOTTOM layer. The geometry will be recreated for you when the translated TOP layer is processed by the receiving kernel.
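A toy translator sketch (all function names invented) shows where that risk concentrates: any recorded function without an equivalent in the receiving kernel can only be carried over as dumb geometry, and that piece of design intent is gone:

```python
# Sketch of a TOP-layer translator: map recorded kernel functions from
# kernel "A" names to kernel "B" names.  Anything unmapped can only be
# carried over as BOTTOM-layer geometry, losing that design intent.
# The function names here are hypothetical.

FUNCTION_MAP = {"extrude": "sweep_linear", "blend_edges": "round_edges"}

def translate_tree(tree):
    translated, lost = [], []
    for func, params in tree:
        if func in FUNCTION_MAP:
            translated.append((FUNCTION_MAP[func], params))
        else:
            lost.append(func)  # no equivalent: falls back to dumb geometry
    return translated, lost

tree = [("extrude", {"distance": 4.0}), ("variable_blend", {"radii": [1, 2]})]
ok, lost = translate_tree(tree)
print(lost)  # ['variable_blend'] - this feature's intent won't survive
```

And this sketch is generous: even a "mapped" function may compute different geometry on the receiving kernel, as the blend examples above showed.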

Now let’s consider the BOTTOM layer, i.e. the geometry. (And for you parametric modeling users who were able to get an accurate translation of the TOP layer, you can skip this section - unless your TOP layer has a lot of import or non-ordered geometry features in it.) As I mentioned earlier, geometry kernels can create geometry in a variety of different forms and at a variety of different resolutions (accuracy). Although we have industry exchange standards for geometry (IGES, STEP), as mentioned above we don't have standards for geometry accuracy or definition. Translation of geometry from one kernel to another is not flawless. You have most likely experienced this imperfection if you have ever translated geometry through the STEP or IGES formats.

If you have experienced errors or gaps in translated geometry, it could be the result of, for example, translating an analytic surface to a NURBS surface, or vice versa. Or it could be a geometry accuracy problem. In some cases the translation issues may be related to the IGES or STEP processors within the CAD tool, but it’s also possible that the issues are due to big differences between the sending and receiving geometry kernels. We all likely know how poor geometry translations can impact the product development process. When moving CAD geometry from one kernel to another, you can expect some of this. Fortunately, most modern CAD systems have tools to quickly resolve these geometry imperfections - don't they?
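As a simplified illustration of the accuracy side of this (the tolerance values are made up, though they are in the ballpark of real systems), here is how a resolution mismatch alone can turn a "watertight" model into one with open edges on import:

```python
# Why geometry "leaks" after translation: the sending kernel stitched two
# edges closed at its own tolerance, but the receiving kernel judges
# closure at a tighter one.  All numbers are illustrative.

def edges_sew(gap, kernel_tolerance):
    """An edge pair is considered closed if the gap is within tolerance."""
    return gap <= kernel_tolerance

gap = 5e-5           # actual distance between the mating edges
sending_tol = 1e-4   # loose-resolution sender: model is "watertight"
receiving_tol = 1e-6 # tight-resolution receiver: same model has a gap

print(edges_sew(gap, sending_tol))    # True  -> valid in the source system
print(edges_sew(gap, receiving_tol))  # False -> open edge / import error
```

Nothing about the geometry changed in transit; only the judge of what counts as "closed" did.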

So how might a kernel change impact your product development process? Well, it depends. Will you be transferring the TOP layer or the BOTTOM layer with the kernel change? I hope you can begin to see the pros and cons of each. Here are a few final questions to ponder:
  • What do you think happens if you use one geometry kernel for concept design, and another for detail design? Can you "round trip" without losing data or degenerating the model?
  • How robust is your TOP layer? Do you maintain strict modeling standards?
  • What geometry accuracy does your geometry kernel run at? Does it make high quality geometry that other kernels can consume without error?
  • Are your drawings associated to the TOP layer or the BOTTOM layer?
  • How is "model-based" impacted when multiple geometry kernels are used in the product development process?
  • What downstream functions are driven by the TOP layer versus the BOTTOM layer?

Tuesday, January 15, 2013

Model-Based Engineering – Are We There Yet? (Part II)

  Part II    (Part I)

I've had many opportunities to conduct product development process assessments at some very large companies. In one such case I visited 4 or 5 different facilities scattered around the US. I interviewed about 70 or 80 different people: VPs, directors, managers, and key contributors to the process. When I conduct these assessments I pay close attention to the input and output of each stage in the process. The output from one stage is of course the input to the next stage – in some form or another. Or at least that’s the way it should work. Then I like to look at the individual activities, methods, and tools used to support that particular stage of the process, also noting where and how data is managed as it flows through the process stage.

During this particular assessment we identified over one hundred different software tools being used in the development and manufacture of their products: authoring tools, analyzing tools, calculation tools, optimizing tools, documenting tools, managing tools, rendering tools, publication tools, presentation tools, communication tools, and so on. What is interesting to consider is that most of the tools actually used to create product-specific data were each creating a unique file type. And in many cases these files (documents) were not directly consumable by any of the other tools on the "input" or "receiving" end of the data. Many of these documents were disconnected islands that required manual updating. My experience tells me that this situation is not too uncommon.

Of course, multiple documents are a reality in any product development environment. A universal PLM/PDM environment will be critical in bringing these many documents together into one master. But there is also value in reducing the number of "documents". A tools consolidation initiative may be an important step towards "Model-Based". Considering the number of different tools used in the development of the product, how many disconnects could there be between actual product requirements and manufacturing? Disconnects always lead to standalone pieces of data, manual data entry, and duplication of effort. Does your environment support "Model-Based"?

Here’s another interesting example. I've worked with many companies that can rarely, if ever, generate a model-based top-level engineering BOM of their product. The BOMs are typically created manually. Keeping the manufacturing BOM up to date and accurate is likely tedious, with much manual interaction. Unfortunately, this situation is not too uncommon either. Are your BOMs "Model-Based"?

There are many reasons that companies may not be able to generate a model-based top-level e-BOM. The assemblies could be so large that they simply can't load the entire assembly and scan it. But there can be other reasons. I see many examples of "overloaded" assemblies: assemblies packed full of information and detail that adds no value to the design, manufacture, or service of the actual product. Here is an example of what I mean by "overloaded". Perhaps you are in the industrial equipment business. Your products are likely packed full of purchased components. In the interest of completeness and quality you request detailed 3D models from your suppliers and integrate them as-is into your design. If you are not careful in this integration process you can quickly and unnecessarily double or even triple the complexity and size of your assemblies.
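One way to picture both problems - getting a top-level e-BOM out of the model, and keeping supplier detail from overloading it - is a simple assembly-tree rollup that stops at purchased components. This is only a sketch; the structure, part numbers, and "purchased" flag are all invented for illustration:

```python
# Rolling up a top-level e-BOM from an assembly tree.  Purchased
# components are counted as single line items; their supplier sub-detail
# is deliberately not expanded, which is one way to avoid the
# "overloaded assembly" problem.

from collections import Counter

assembly = {
    "part": "machine-001",
    "children": [
        {"part": "frame-010", "children": []},
        {"part": "gearbox-900", "purchased": True, "children": [
            # 40 internal supplier parts we do NOT want in our e-BOM
            {"part": f"gb-internal-{i}", "children": []} for i in range(40)
        ]},
        {"part": "gearbox-900", "purchased": True, "children": []},
    ],
}

def rollup(node, bom):
    for child in node["children"]:
        bom[child["part"]] += 1
        if not child.get("purchased"):  # stop at purchased components
            rollup(child, bom)
    return bom

bom = rollup(assembly, Counter())
print(dict(bom))  # {'frame-010': 1, 'gearbox-900': 2}
```

Three line items instead of forty-three: the supplier's internals never reach the BOM, yet every purchased unit is still counted.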

Getting a model based top level e-BOM could be a fundamental step in making progress towards "Model-Based". As such, one of the first steps towards "Model-Based" could be to optimize the master through some best practices around data reduction and data integration. In the end, “Model-Based” demands a rich master. If the master is already too full and unnecessarily complex - you have a problem. Is your master optimized for "Model-Based"?

One last example. I've experienced a few companies where the end product is a highly stylized, ergonomic, creative, and attractive product. The product is more “touchy-feely” than highly engineered. In many of these cases the majority of the intellectual property is defined by the “Artist”, i.e. the Industrial Designer. It’s the job of the “Detail” Designer to make the product manufacturable. In this situation, the design master is made up of the early concepts and aesthetic requirements. If a design change happens, it almost always goes back to the artist. The change then ripples back through all the downstream stages. Can you guess how many times the detail designer might have to create and/or re-create the as-manufactured CAD model? Are your detailed 3D CAD models “Model-Based”?

I know I have some strange views on what “Model-Based” could be. Perhaps I'm completely off-base, but I hope I have stretched your thoughts on the topic just a bit. Let me know what you think. Feel free to add your comments below.