They say the only constant in life is change and that’s as true for blogs as anything else. After almost a dozen years blogging here on WordPress.com as Another Fine Mesh, it’s time to move to a new home, the … Continue reading
The post Farewell, Another Fine Mesh. Hello, Cadence CFD Blog. first appeared on Another Fine Mesh.
Welcome to the 500th edition of This Week in CFD on the Another Fine Mesh blog. Over 12 years ago we decided to start blogging to connect with CFDers across the interwebs. “Out-teach the competition” was the mantra. Almost immediately … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
Automated design optimization is a key technology in the pursuit of more efficient engineering design. It supports the design engineer in finding better designs faster. A computerized approach that systematically searches the design space and provides feedback on many more … Continue reading
The post Create Better Designs Faster with Data Analysis for CFD – A Webinar on March 28th first appeared on Another Fine Mesh.
It’s nice to see a healthy set of events in the CFD news this week and I’d be remiss if I didn’t encourage you to register for CadenceCONNECT CFD on 19 April. And I don’t even mention the International Meshing … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
Some very cool applications of CFD (like the one shown here) dominate this week’s CFD news including asteroid impacts, fish, and a mesh of a mesh. For those of you with access, NAFEM’s article 100 Years of CFD is worth … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
This week’s aggregation of CFD bookmarks from around the internet clearly exhibits the quote attributed to Mark Twain, “I didn’t have time to write a short letter, so I wrote a long one instead.” Which makes no sense in this … Continue reading
The post This Week in CFD first appeared on Another Fine Mesh.
Engineers have been adapting biological materials into robotics in recent years. One of the latest versions of this trend is “necroprinting,” in which researchers built a microscale 3D printer around a mosquito’s proboscis. Made to pierce thick skin to reach blood, the mosquito proboscis offered the kind of size, geometry, and stiffness needed for small-scale printing. The team found that their necroprinter performed well at the ~20 micron scale, with the mosquito-based nozzle costing only a fraction of what a conventional human-made nozzle would. (Image credit: NIAID; research credit: J. Puma et al.; via Ars Technica)
When an object like a sphere enters the water, it drags air into the water behind it, creating a cavity. Depending on the sphere’s impact speed, the cavity might close first under the water, forming a deep seal, or at the surface with a surface seal. But, as this video points out, water often isn’t still. Here, they explore how the sphere’s entry changes when there are ripples on the water surface. (Video and image credit: M. Ibrahim et al.; via GFM)
Spheres of a Volvox colonial algae glow green inside a droplet in this award-winning microphotograph by Jan Rosenboom. Pinned on an inclined surface, the droplet is frozen in a balance between gravity and surface tension that keeps its shape, and its contact angles, asymmetric. Droplets also take on a similar shape when air blows past them. (Image credit: J. Rosenboom; via Ars Technica)
Venus is a world of extremes. A full rotation of the world takes 243 Earth days, but winds race around the planet at a speed that makes a Category 5 hurricane look sedate. Just what drives these winds has been an ongoing question for planetary scientists. A recent study suggests that tides are a major contributor to this superrotation.
Unlike Earth’s tides, Venus’s are not gravitational in origin. Instead, Venusian tides are thermal, driven by heating on the sunward side of the atmosphere. This creates a diurnal tide, which cycles once per Venusian day and pumps momentum toward the tops of Venus’s clouds. The new analysis, rooted in both observations and numerical simulation, finds that diurnal tides are the primary driver behind the planet’s incredibly fast winds. (Image credit: NASA/JPL-Caltech; research credit: D. Lai et al.; via Eos)

Large-scale computational fluid dynamics simulations face many challenges. Among them is the need to capture both large physical scales, like those of Earth’s atmospheric boundary layer, and small scales, like those of the tiny eddies moving around a wind-turbine blade. Capturing all of these scales for a problem like four wind turbines in a wind farm requires the full computing power of every processor in a large supercomputer. That’s the level of power behind the simulation visualized in this video. The results, however, are stunning. (Video and image credit: M. da Frahan et al.)
Inside a fusion reactor, magnetically-contained plasma gets heated to more than one hundred million degrees. That heat, researchers observed, spreads much faster than originally predicted. Now a team from Japan has measurements showing how turbulence manages this feat.
The researchers show that the multiscale nature of turbulence allows it to transport heat in two ways. The first is familiar: acting locally, turbulence spreads heat little by little as small eddies mix and pass the heat along. But turbulence can also be nonlocal, they show, able to connect physically distant parts of a flow more rapidly than expected. This happens through turbulence’s larger scales, which can rapidly carry heated plasma from one side of the vessel to another.
The researchers illustrate the two roles of turbulence through a metaphor of American football (can you believe it?). In their metaphor, the quarterback acts as turbulence and the ball represents heat. The quarterback can pass the ball to reach distant parts of the field quickly, just as nonlocal turbulence does, or they can hand off the ball to a running back, who carries it down the field more slowly, through local interactions with other nearby players. (Image credit: National Institute for Fusion Science; research credit: N. Kenmochi et al.; via Gizmodo and EurekAlert)
heaterWall
{
    type groovyBC;
    value uniform 300; // Reference temperature value. If this is not provided, the solver assumes a wall temperature of 0 by default.
    valueExpression "0"; // Used when you want a Dirichlet BC as an expression, e.g. wall temperature as some function of position or even time.
    gradientExpression "gradientT"; // Neumann BC - for a heat flux we actually set the temperature gradient across the wall; the expression is given below.
    fractionExpression "0"; // 0 for a Neumann BC; 1 for a Dirichlet BC.
    variables
    (
        "Cp0=4182.4;" // Specific heat capacity [J/kg-K]
        "rho0=984.6;" // Density [kg/m3]
        "Power=552;" // Input power [W] - optional. If you know the heat flux, you can enter it directly as heatFlux below.
        "Area=0.062114;" // Heater surface area [m2]
        "heatFlux=Power/Area;" // Heat flux [W/m2]
        "kappaEff=alphaEff*Cp0*rho0;" // Effective thermal conductivity [W/m-K]
        "gradientT=heatFlux/kappaEff;" // Temperature gradient which will be set as the gradient boundary in groovyBC
        // alphaEff - effective thermal diffusivity, calculated by the solver as:
        //   alphaEff = alpha_mol + alphat
        // alpha_mol: molecular thermal diffusivity [m2/s] - a thermophysical property of the fluid, provided in the transportProperties dictionary
        // alphat: turbulent thermal diffusivity [m2/s] - calculated by the solver in your time directory
        // If the flow is laminar, you can substitute kappa, the actual thermal conductivity of the fluid, for kappaEff.
    );
    evaluateDuringConstruction false; // The false flag allows the use of external parameters like alphaEff, which is not defined here.
}
You can run the case now.
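As a quick sanity check, the gradient that groovyBC will apply can be reproduced outside the solver. This is only a sketch: the alphaEff value below is a hypothetical placeholder, since in the real case it comes from the solver's time directory.

```python
# Sanity check for the groovyBC Neumann gradient, mirroring the
# variables block above. alpha_eff is a HYPOTHETICAL placeholder;
# in the actual case it is computed by the solver.
Cp0 = 4182.4        # specific heat capacity [J/kg-K]
rho0 = 984.6        # density [kg/m3]
power = 552.0       # input power [W]
area = 0.062114     # heater surface area [m2]
alpha_eff = 1.5e-7  # effective thermal diffusivity [m2/s] (assumed)

heat_flux = power / area             # [W/m2]
kappa_eff = alpha_eff * Cp0 * rho0   # effective conductivity [W/m-K]
gradient_T = heat_flux / kappa_eff   # [K/m], the value groovyBC applies

print(f"heat flux = {heat_flux:.1f} W/m2")
print(f"gradientT = {gradient_T:.1f} K/m")
```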
sed -i 's/groovyBC/fixedValue/g' */T

This command searches every file named T (one directory level down) for "groovyBC" and replaces it with "fixedValue".
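If sed is unavailable (e.g. on Windows), the same substitution can be scripted. A minimal Python sketch, assuming the usual OpenFOAM layout where each time directory holds a T file:

```python
# Replace groovyBC with fixedValue in every file named T, one
# directory level down -- the Python equivalent of the sed command.
from pathlib import Path

for t_file in Path(".").glob("*/T"):
    text = t_file.read_text()
    t_file.write_text(text.replace("groovyBC", "fixedValue"))
```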
Hi sakro,
Sadly my experience in this subject is very limited, but here are a few threads that might guide you in the right direction:
Best regards and good luck! Bruno
Figure 1: Hexahedral Meshing of Vortex Generators in Wind Turbines.
1588 words / 8 min read
Vortex generators can boost turbine performance dramatically—but only if your mesh captures their near-wall physics with accuracy. In this guide, learn a proven multi-block workflow that keeps y⁺ stable, reduces skewness, and delivers high-fidelity CFD results you can trust.
Wind turbine engineers appreciate the strong lift boost that vortex generators (VGs) can deliver, especially on blades operating in rough or degraded surface conditions. But if you ask a CFD engineer about them, you’ll likely hear a familiar line: “Meshing the VG is harder than simulating the entire blade.” And that sentiment is justified. A VG is tiny, sharp-edged, and positioned in the most sensitive portion of the boundary layer—right where the flow is thinnest, most fragile, and least forgiving. Even minor mesh distortions here can cause numerical instability, unrealistic gradients, or complete loss of physical correctness.
Recent research findings show just how impactful VGs can be when modeled correctly. Under rough-wall operating conditions, controlled experiments (Marine Sci and Engg Journal) reported that VGs reduce flow separation by nearly one-third and increase power output by close to 48%. These performance gains are significant, especially for blades already experiencing surface erosion or contamination. But these benefits do not appear automatically in CFD. They only emerge when the mesh precisely represents near-wall momentum exchange and resolves the VG geometry without compromising y⁺ or orthogonality.
This is where structured multi-block meshing becomes essential. Unlike unstructured approaches, a structured topology allows full control over cell alignment, growth rates, and boundary-layer thickness. It enables smooth extrusion around VG edges, reduces skewness, and retains the numerical stability required to reproduce real-world VG effects. In short: to model VGs correctly, the mesh must be exceptionally clean—and structured blocks give you the tools to achieve that.

Vortex generators (VGs) play a significant role in improving the aerodynamic performance of wind turbine blades, especially as they age and accumulate surface roughness. Controlled studies (Marine Sci and Engg Journal) show that VGs help maintain attached flow by energizing the boundary layer, reducing separation, and restoring lift in conditions where a bare blade would begin to stall.
Full-scale assessments of multi-megawatt turbines (Renewable Energy Journal) reinforce this evidence. Turbines equipped with VGs consistently demonstrate delayed stall onset, smoother surface flow, and measurable increases in power output. These improvements are not marginal—they can shift the power curve upward in operating regions that normally suffer from aerodynamic losses.
Research investigating different VG shapes and configurations (Energies Journal) shows that even small adjustments in height, angle, or spacing can greatly affect boundary-layer behavior. This sensitivity highlights just how dependent VG performance is on exact geometry and flow resolution.
It is for this reason that appropriate resolution of the flow field around VGs is a must in CFD simulations. If the VG is not captured correctly, the CFD solution may appear well behaved numerically, yet fail to predict the turbine’s actual behavior. This is why accurate meshing matters: the VG’s aerodynamic benefits exist only when near-wall physics are resolved with high fidelity.
Meshing vortex generators (VGs) is challenging because they combine small geometric features with extremely sensitive flow physics. A VG may only be a few millimeters tall, but it sits inside the tightest region of the boundary layer—where gradients in velocity, shear stress, and turbulence are the sharpest. Any weakness in mesh quality here can permeate through the entire simulation. Studies examining transitional boundary-layer behavior (Ocean Engg Journal) show that even slight distortions near the VG’s leading edge can alter shear distribution, weaken momentum transfer, and shift separation points by a significant margin.
The difficulty isn’t just the small size of the geometry. It’s the fact that the flow around a VG changes direction rapidly, creating strong local vortices that depend on precise near-wall resolution. If the first-layer spacing, orthogonality, or cell skewness is off—even by a little—the numerical model may fail to form the correct vortical structures.
Stall-prevention studies (Springer Publication) emphasize that errors near the VG base can distort the entire downstream flow field. Unstructured meshes often struggle to maintain consistent y⁺ and clean cell alignment around these sharp edges.
This is why VG meshing is often considered more difficult than the blade itself: the geometry is small, the physics are intense, and the simulation is unforgiving. Structured meshing workflows provide the stability needed to accurately resolve these complexities.

Structured multi-block meshing is particularly effective for vortex-generator (VG) simulations because it gives engineers exact control over the boundary-layer topology. Unlike unstructured meshes—where cell shapes and alignment can vary unpredictably—structured blocks enforce smooth, continuous grid lines that follow the blade surface and VG geometry exactly. This alignment approach is critical in regions where the fluid flow is highly sensitive. Studies on wind-turbine blade meshing (WES Journal) show that structured approaches consistently produce better orthogonality and more stable y⁺ distributions.
Multi-block layouts also allow the VG to be isolated inside its own dedicated block. By controlling topology at this local scale, the mesh can resolve the sharp VG edges cleanly without distorting neighboring cells. Research on aerodynamic grid generators (WES Journal) demonstrates how dividing complex geometric structures into smaller, well-aligned blocks reduces skewness and improves solver behavior.
Beyond geometric accuracy, structured blocks make boundary-layer extrusion more reliable. Cell growth can be set smoothly and predictably, producing cleaner layers even around tight curvature. The result is a mesh that captures VG-induced vortices more faithfully and maintains numerical stability, making structured multi-block methods the preferred choice for high-fidelity VG simulations.

Meshing vortex generators (VGs) becomes far easier when the process is broken into a clear, repeatable sequence. The goal is simple: preserve clean boundary-layer topology while resolving the VG’s sharp edges and strong local gradients. The steps below follow practices validated in VG layout research (Applied Sciences Journal) and turbine blade meshing studies (WES Journal).
1. Start with Clean, Split Geometry: Begin by ensuring the blade surface is watertight and that the VG is correctly merged and split along its base. Clean geometry reduces deformation during block creation and prevents layer collapse. VG performance studies show that small geometric inaccuracies can shift separation points significantly, reinforcing the need for accurate modeling.
2. Block the Blade Surface First: Create structured blocks along the blade in both the chordwise and spanwise directions. Establishing this base topology first provides a stable framework for adding VG sub-blocks later and ensures that surface curvature is captured smoothly.
3. Add a Local Sub-Block for Each VG: Each VG should live inside its own dedicated block. This isolates the sharp edges and allows the mesh to resolve the VG without distorting nearby cells. Multi-block aerodynamic meshing research (WES Journal) demonstrates how local blocks improve orthogonality and keep gradients physically correct.
4. Extrude the Boundary Layer Smoothly: Use hyperbolic or structured extrusion to generate layers around both the blade and the VG. Studies on boundary-layer extrusion (Applied Sciences Journal) show that smooth growth rates reduce skewness and improve solver stability.
5. Smooth Transitions and Check y⁺ Everywhere: Ensure transitions between blocks are gradual to prevent sudden jumps in cell size. Consistent y⁺—on the blade and the VG—is vital for accurate vortex formation. Mesh-sensitivity studies (WES Journal) confirm that stable near-wall spacing directly impacts power-prediction accuracy.
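The y⁺ check in step 5 is easier if the first-layer height is estimated before meshing. A minimal sketch using the flat-plate skin-friction correlation, with illustrative free-stream values (not taken from the studies cited above):

```python
import math

def first_cell_height(y_plus, U, L, rho=1.225, mu=1.81e-5):
    """Estimate the first-layer height for a target y+ using the
    flat-plate skin-friction correlation Cf = 0.026 * Re^(-1/7)."""
    re = rho * U * L / mu               # Reynolds number based on chord
    cf = 0.026 / re ** (1.0 / 7.0)      # skin-friction coefficient
    tau_w = 0.5 * cf * rho * U ** 2     # wall shear stress [Pa]
    u_tau = math.sqrt(tau_w / rho)      # friction velocity [m/s]
    return y_plus * mu / (rho * u_tau)  # wall spacing [m]

# Illustrative blade-section conditions: 70 m/s relative speed, 1 m chord
ds = first_cell_height(y_plus=1.0, U=70.0, L=1.0)
print(f"first cell height ~ {ds * 1e6:.1f} microns")
```

At these conditions the estimate lands in the single-micron range, which is why VG meshing demands such tight near-wall control.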

GridPro simplifies the challenge of meshing vortex generators (VGs) by giving engineers fine control over structured multi-block layouts. Instead of fighting distorted or irregular cells, GridPro lets you assign clean blocks directly around the blade and each VG, ensuring that the topology stays aligned with the geometry. Research on multi-block aerodynamic grid generation has shown that subdividing complex surfaces into well-organized blocks significantly reduces skewness and improves CFD stability.
GridPro’s automated smoothing and high-quality extrusion tools help maintain orthogonality and consistent layer growth—two requirements for capturing VG-induced vortices accurately.
Engineers frequently note that GridPro’s structured approach “removes the guesswork” from meshing. By keeping the topology clean and predictable, it becomes much easier to hit target y⁺ values, avoid mesh collapse near VG edges, and achieve stable, high-fidelity simulations.

A high-quality vortex-generator (VG) mesh maintains smooth, orthogonal layers across both the blade surface and the VG itself. Refinement must be tight around VG edges, with gradual layer expansion to preserve stability in the boundary layer. Studies on turbine mesh sensitivity (WES Journal) show that consistent y⁺, low skewness, and controlled growth rates are essential for forming accurate vortices.
A good VG mesh also avoids abrupt transitions between blocks, preventing distortion and numerical noise. When these elements are in place, the CFD solution captures VG-driven flow behavior reliably and matches real-world turbine performance.
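The quality criteria above can be checked with simple metrics. A sketch of the equiangle skewness measure for quad cells (the function and thresholds here are illustrative, not a GridPro API):

```python
def equiangle_skewness(angles_deg):
    """Equiangle skewness for a quad cell (ideal angle 90 degrees):
    0 is a perfect cell, values approaching 1 are degenerate.
    Input: the cell's four interior angles in degrees."""
    ideal = 90.0
    return max(
        max((a - ideal) / (180.0 - ideal) for a in angles_deg),
        max((ideal - a) / ideal for a in angles_deg),
    )

print(equiangle_skewness([90, 90, 90, 90]))    # perfectly orthogonal quad
print(equiangle_skewness([60, 120, 60, 120]))  # skewed quad
```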

Meshing vortex generators (VGs) accurately is vital to capturing their real aerodynamic benefits on wind turbine blades. Because VGs sit in the most sensitive region of the boundary layer, even small flaws in mesh quality can distort separation behavior, vortex formation, and overall turbine performance.
A structured multi-block approach provides the control needed to maintain clean topology, smooth extrusion, and stable y⁺ values around these sharp, tiny features. When each VG is placed in its own well-aligned block and the surrounding layers are carefully managed, the CFD solution becomes far more reliable. In short, mastering the mesh is the key to modeling VG physics with confidence.
Interested in Using GridPro for Your Wind Turbine Meshing Projects?
GridPro’s advanced multi-block structured meshing tools deliver the precision, efficiency, and scalability needed for aerodynamic simulations of turbine blades and rotors.
Click Here to Learn More or Request a Demo.
The post The Best Way to Mesh Vortex Generators (VGs) for Accurate Wind Turbine CFD appeared first on GridPro Blog.
Figure 1: Structured mesh for wind turbine blades CFD simulation.
Word count: 1724 / 9 min read
Behind every accurate wind-turbine CFD simulation lies one secret ingredient — the mesh. It shapes how air, pressure, and turbulence are understood by your solver. From blade-root complexity to tip vortices, structured multi-block meshes reveal the physics others miss — proving that in CFD, mesh defines the math — and the outcome.
Ever wondered why some CFD simulations for wind turbine blades converge beautifully, while others drag on for days and still struggle to stabilise? The difference often lies not in the solver or turbulence model — but in the mesh.
In computational fluid dynamics, the mesh is more than a discretisation step; it’s the invisible architecture that determines how faithfully your solver captures aerodynamic reality. For wind turbines, where efficiency, torque, and wake behaviour drive performance, the blade mesh becomes the foundation on which every result depends.
This post explores how CFD meshes are built for modern wind turbine blades — why structured, multi-block grids continue to outperform unstructured ones — and how multi-block structured meshing workflows help engineers balance accuracy, efficiency, and scalability.

In wind turbine simulations, the blade mesh is the foundation of aerodynamic accuracy. Every physical process — lift, drag, torque, and wake formation — begins at the blade. If that geometry is poorly represented, even the most advanced solver cannot recover the missing physics.
A turbine blade presents meshing challenges unlike any other aerodynamic surface. It twists, tapers, and curves along its span, with rapidly changing pressure gradients from root to tip. A good mesh must capture these variations smoothly, maintaining orthogonality and controlled growth rates. Misaligned or stretched cells distort gradients, leading to lift and pressure errors that cascade downstream.
Research studies have demonstrated that multi-block structured meshes significantly improve residual convergence and pressure recovery around wind turbine blades. By aligning cells with the flow and maintaining smooth transitions across blocks, structured meshes reduce numerical diffusion and deliver more stable results. In other words, the mesh doesn’t just describe the geometry — it defines how the solver “sees” the physics.
The aerodynamic performance of a turbine originates entirely at the blade surface. Lift arises from the pressure difference between suction and pressure sides, while viscous drag builds within the boundary layer. The resulting wake pattern determines how efficiently the turbine extracts energy and how it affects downstream rotors.
To predict this behavior accurately, the mesh must align with chordwise and spanwise flow directions, allowing the solver to resolve gradients along the same paths as the air. Misaligned or skewed elements introduce cross-flow diffusion that can shift lift and drag predictions by several percent — a critical error in design optimization. Structured, flow-aligned grids eliminate this uncertainty and maintain stable wake resolution across operating conditions.
Solver capability matters, but surface resolution often matters more. The solver can only calculate gradients based on the mesh it’s given. Capturing the thin boundary layer requires precise control of wall spacing to maintain y⁺ < 1 for RANS or LES simulations. Structured or multi-block grids make that control practical and repeatable.
A research study has found that refining near-wall spacing improved lift prediction by over 7%, whereas changing solvers affected accuracy by less than 2%. The lesson is clear: in wind turbine CFD, the mesh doesn’t support the solver — it empowers it.

In CFD for wind turbines, the debate between structured and unstructured meshing is less about preference and more about purpose. Both can model complex geometric forms, but their differences in accuracy, computational effectiveness, and ease of generation determine which one wins in practice.
Structured meshes are like an ordered lattice — every cell has a predictable neighbor, creating a logical grid. Unstructured meshes, by contrast, resemble a mosaic — flexible and automatic but less uniform. Each has strengths, but for wind turbine blades, where flow alignment and boundary-layer accuracy dominate, structure still leads.
Structured grids shine when exactness and stability are non-negotiable. Their ordered connectivity lets solvers compute gradients efficiently, improving convergence and reducing RAM overhead. For turbine blades, structured meshes can align with the chordwise and spanwise flow directions, reducing numerical diffusion and ensuring that pressure and velocity gradients remain physically consistent.
According to a meshing study, structured meshes provide superior wake resolution and boundary-layer capture compared to unstructured alternatives. They also produce cleaner surface pressure distributions, leading to more accurate lift and torque predictions — essential for blade optimization and aeroelastic analysis.
In rotating reference frames, structured meshes maintain better continuity across sliding interfaces, keeping wake behavior stable during transient runs. For these reasons, structured grids are still the standard for final design validation in wind turbine aerodynamics.
Unstructured meshing earns its place in early-stage design. It handles complex geometry automatically and dramatically reduces setup time. Engineers frequently depend on it for parametric sweeps or concept evaluations where turnaround matters more than precision.
However, studies have shown that unstructured meshes typically require two to three times more elements to achieve the same near-wall accuracy as structured ones. They’re fast to generate but computationally costlier to run.
The choice between structured and unstructured meshing boils down to balancing time and truth. Structured meshes take longer to prepare but pay off with faster solvers and cleaner results. Unstructured meshes are convenient but often over-resolve regions unnecessarily.
In production CFD workflows, engineers increasingly adopt multi-block structured approaches, capturing the accuracy of structured grids without sacrificing usability. The result: faster iteration, stable convergence, and data you can trust — the real measure of CFD success.

Once a structured mesh is chosen, topology — how the grid wraps around the blade — becomes the next crucial decision. The topology determines how the mesh conforms to flow features, how well it resolves the boundary layer, and how cleanly it connects across the blade span. For wind turbine CFD, the most common structured topologies are C-grid, O-grid, and H-grid, each tailored to different flow regions and geometric needs.
The ideal topology follows the natural path of the flow. It avoids abrupt cell skewness, preserves orthogonality near the surface, and ensures uniform growth ratios in the wall-normal direction. For most turbine blades, that balance is best achieved using an O-grid around the airfoil surface, transitioning to a block-structured configuration near the hub and tip.
The O-grid wraps around the blade like a smooth envelope, providing excellent near-wall resolution and consistent spacing from leading to trailing edge. Its circular topology minimizes skewness and supports fine y⁺ control, vital for exact boundary-layer prediction.
According to Peralta et al. (2014), O-grids deliver better pressure recovery and flow stability than C- or H-grids, especially for curved turbine profiles.
At the blade tip, the flow accelerates and rolls into strong vortices. Here, a refined multi-block O-grid or hybrid cap structure keeps cells orthogonal and prevents extreme aspect ratios. This stability allows accurate modeling of tip-losses and wake shedding without numerical artifacts.
Near the root, where the blade meets the hub, geometry transitions become intricate. Multi-block H- or C-grid combinations handle intersecting surfaces and fillets effectively while maintaining structured order. Such topologies ensure smooth continuity between rotating and stationary zones — a key factor in reliable torque and load predictions.

Even the best topology needs careful refinement to achieve accurate aerodynamics. For wind turbine blades, surface resolution determines how faithfully the solver captures the boundary layer — where most aerodynamic forces are generated. The goal is to maintain consistent cell growth normal to the wall and keep y⁺ values below 1 for RANS or LES models, ensuring smooth transition from viscous sublayer to outer flow.
Proper refinement avoids under-resolved gradients that distort lift and drag predictions. According to a research study, refining near-wall spacing improves lift prediction by over 7%, underscoring that resolution quality often matters more than solver choice.
Laminar regions demand ultra-fine spacing to resolve gentle velocity gradients, while turbulent flows require stable y⁺ control to model shear stresses accurately. Structured meshes excel here by allowing exponential control of wall-normal spacing without introducing skewness.
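That exponential wall-normal control is just a geometric progression. A small sketch, with an assumed boundary-layer thickness and growth ratio, shows how the layer count follows from the first-cell height:

```python
def layers_to_cover(delta, first_height, growth):
    """Number of prism layers with geometric growth needed for the
    stack's total height to reach the boundary-layer thickness delta."""
    n, total, h = 0, 0.0, first_height
    while total < delta:
        total += h
        h *= growth
        n += 1
    return n, total

# Illustrative values: 20 mm boundary layer, 5 micron first cell, ratio 1.2
n, height = layers_to_cover(delta=0.02, first_height=5e-6, growth=1.2)
print(f"{n} layers, total stack height {height * 1e3:.2f} mm")
```

Tightening the growth ratio toward 1.1 roughly doubles the layer count for the same coverage, which is the usual trade-off between smoothness and cell budget.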
Wind turbines frequently operate in mixed laminar–turbulent regimes. Capturing this transition correctly depends on mesh consistency along the suction surface. Structured O-grid and C-grid layouts preserve smooth growth rates and curvature alignment, enabling transition models to predict separation and reattachment naturally — essential for precise lift and stall prediction.
CFD meshing for wind energy is rapidly evolving toward automation, adaptability, and intelligence. The next generation of tools is expected to use AI-assisted block decomposition and machine learning-based refinement to generate optimal meshes automatically — minimizing user input while preserving structured quality.
Recent studies, such as one posted on arXiv in 2024, demonstrate how deep reinforcement learning can identify optimal grid spacing for flow through rotating blades, producing near-grid-converged results without manual tuning. Combined with adaptive meshing and digital twin integration, future workflows will enable meshes that evolve dynamically with real-time operating conditions, refining where vortices form or turbulence intensifies.
Structured and multi-block grids will remain central to this shift, serving as the stable, physics-consistent backbone for these intelligent automation systems — ensuring wind turbine simulations stay accurate, efficient, and ready for the era of predictive, data-driven design.
For modern wind turbine simulations, the mesh is more than a setup — it’s the physics engine’s blueprint. Structured and multi-block grids outperform unstructured ones not because they’re more elegant, but because they resolve the truth better.
By maintaining order, alignment and control, they deliver reliable predictions for lift, drag, torque and wake formation — the lifeblood of turbine design. Using block-structured meshing workflows founded on academic research, engineers can achieve that precision with speed and repeatability.
When accuracy meets efficiency, structure still leads.
Interested in Using GridPro for Your Wind Turbine Meshing Projects?
GridPro’s advanced multi-block structured meshing tools deliver the precision, efficiency, and scalability needed for aerodynamic simulations of turbine blades and rotors.
Click Here to Learn More or Request a Demo.
The post How CFD Meshes Are Built for Modern Wind Turbine Blades: Why Structure Still Wins appeared first on GridPro Blog.
Figure 1: Structured hexahedral volute mesh using GridPro’s automation tool.
Word count: 1660
Discover how GridPro’s automation solution is revolutionizing volute mesh generation for turbochargers, pumps, and compressors. Explore the significance of volutes in engineering, the driving forces behind turbocharger design, and learn how structured meshes drive innovation. Streamline your engineering processes with GridPro’s innovative tool.
The shipping industry is propelled by a blend of economic, environmental, and regulatory pressures. Regulations such as the International Maritime Organization’s (IMO) MARPOL Annex VI limit sulfur oxide (SOx) and nitrogen oxide (NOx) emissions from ships, compelling the industry to adopt cleaner, more efficient technologies.
Also, rising fuel costs and regulations like the Energy Efficiency Design Index (EEDI) mandate improved fuel efficiency in ship designs. Moreover, the quest for operational efficiency and competitive advantage fuels innovation, prompting shipbuilders and operators to seek cutting-edge solutions.

The need for efficient, powerful, and environmentally friendly ship engines is significantly influencing the design and development of turbochargers. Turbochargers, essential for improving engine efficiency and reducing emissions, must evolve to meet these new challenges. They need to be more durable, efficient, and environmentally friendly, pushing engineers to innovate continually.
One key area of improvement in turbocharger performance is the design of the volute. The volute, a spiral casing that guides exhaust gases into the turbine, plays a crucial role in the turbocharger’s efficiency. Optimizing the shape and size of the volute can lead to significant performance gains, including higher pressure ratios, improved airflow, and reduced energy losses. This translates into tangible benefits for ship operators.
CFD plays a pivotal role in the design and development of turbochargers. CFD allows engineers to simulate and analyze fluid flow, heat transfer, and other physical phenomena within the turbocharger, providing insights that are impossible to obtain through traditional testing methods. It enables engineers to identify performance bottlenecks, optimize the shape and size of volutes and other components, and explore novel design concepts without extensive physical prototyping. This accelerates the development process and leads to more effective and efficient turbocharger designs.
To further boost the design and development of turbochargers, GridPro is introducing volute mesh automation. Utilizing advanced meshing algorithms, it creates precise, high-quality meshes for CFD simulations, significantly reducing manual effort. This automation ensures consistent mesh quality, facilitating accurate CFD analyses. Engineers can iterate on volute designs rapidly, optimizing performance and accelerating the overall development of turbochargers. This ultimately results in better-performing turbochargers tailored to meet the stringent demands of the shipping industry.
The volute in a turbocharger plays a crucial role by guiding exhaust gas flow from the engine towards the turbine blades, where the exhaust gas’s energy is converted into mechanical energy to drive the compressor. This function is vital, as the volute’s design significantly impacts the efficiency and flow characteristics of the turbine under various operating conditions.

A well-designed volute with an optimal cross-sectional shape is essential for providing uniform flow to the rotor at the desired angle. This uniformity maximizes energy recovery and enhances the efficiency of the turbocharger turbine. The cross-sectional shape of the volute directly influences the direction and magnitude of the flow at the turbine rotor inlet, affecting the overall efficiency of the turbocharger. An optimized volute design can lead to improved cycle-averaged efficiency, especially under the pulsating flow conditions typical of internal combustion engine exhausts. Enhanced efficiency results in better energy recovery from the exhaust gas, thereby increasing the engine’s power density.
Moreover, the cross-sectional shape of the volute impacts secondary flow patterns and the development of vortices within the volute. For instance, a volute designed to produce smaller vortices will exhibit faster response times and superior performance under pulsating conditions compared to one with larger vortices, which have more inertia and respond more slowly.
Different volute designs can lead to varying levels of total pressure loss and flow distortion. A volute with a sharper corner and flatter cross-sectional shape can enhance secondary flow development, resulting in higher pressure losses and more distorted flow at the rotor inlet, which deteriorates the turbine’s performance. Conversely, optimized volute shapes can reduce these losses and improve flow uniformity, contributing to better overall performance.

Additionally, the volute’s design determines its sensitivity to the pulsating nature of the exhaust flow. A well-designed volute can maintain a more stable and predictable flow pattern even under unsteady conditions, which is crucial for maintaining high efficiency and performance in real-world operating conditions of internal combustion engines.
In short, the volute is a critical component in a turbocharger that significantly affects its performance by influencing flow patterns, efficiency, pressure losses, and sensitivity to pulsating flow conditions. Optimizing the volute design can lead to marked improvements in turbocharger efficiency, power density, and overall engine performance.
The evolution of turbocharger design is steered by a multitude of factors, each contributing to the relentless pursuit of innovation and efficiency across the automotive, shipping, and aerospace industries.
A primary force behind this evolution is the increasingly stringent emissions standards worldwide. Manufacturers are under pressure to develop turbochargers that enhance engine thermal efficiency and significantly reduce CO2, NOx, and particulate matter emissions. Turbocharging plays a pivotal role in meeting these regulatory requirements by improving combustion efficiency.
Concurrently, the growing emphasis on fuel economy and sustainability is pushing turbocharger designs to focus on enhancing engine efficiency. The goal is to increase power output without a significant rise in fuel consumption.

This is particularly important as the trend towards downsizing engines continues to gain traction. By reducing engine size while maintaining or improving performance, turbochargers enable smaller engines with reduced displacement to produce the same or higher power outputs. This is achieved by increasing the air intake pressure, thereby enhancing volumetric efficiency. When downsizing is combined with downspeeding—operating at lower engine speeds—fuel consumption is further reduced, and vehicle weight is minimized.
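As a rough illustration of the downsizing arithmetic (the displacements below are made-up examples, and the estimate ignores intake temperature and volumetric-efficiency differences), the boost pressure needed to restore a larger engine's airflow scales with the displacement ratio:

```python
# Rough estimate of the intake (boost) pressure a downsized engine needs
# to match the air mass flow of a larger naturally aspirated one.
# Assumes equal volumetric efficiency and intake temperature (illustrative only).

def required_boost_ratio(displacement_old_l, displacement_new_l):
    """Intake-density (hence absolute-pressure) ratio needed so the
    smaller engine ingests the same air mass per cycle."""
    return displacement_old_l / displacement_new_l

ambient_kpa = 101.325
ratio = required_boost_ratio(2.0, 1.4)          # 2.0 L engine downsized to 1.4 L
boost_abs_kpa = ratio * ambient_kpa             # absolute manifold pressure
boost_gauge_kpa = boost_abs_kpa - ambient_kpa   # gauge boost the turbo must supply

print(f"pressure ratio ~ {ratio:.2f}, gauge boost ~ {boost_gauge_kpa:.0f} kPa")
```

Even this crude estimate shows why a 30% smaller engine needs roughly 0.4 bar of boost just to break even on airflow, before any power gain.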
Despite these advancements, traditional turbochargers often suffer from slow transient response, which negatively impacts vehicle drivability and acceleration. To address this issue, innovations such as electrically assisted turbochargers are being developed. These new designs improve response time without causing parasitic losses to the engine, a crucial improvement for maintaining the performance and attractiveness of turbocharged vehicles.
Intense competition in the automotive industry further motivates manufacturers to continuously innovate turbocharger designs, striving to stay ahead in terms of performance, efficiency, and reliability. This competitive drive is closely linked to the need to meet specific market demands and customer expectations. As a result, there is a strong focus on developing turbochargers that offer better drivability, fuel efficiency, reduced turbo-lag, and overall improved vehicle performance.
Technological advancements play a significant role in this ongoing evolution. Progress in materials, manufacturing processes, and computational fluid dynamics (CFD) has enabled the development of more efficient and responsive turbochargers. Innovations in materials and structural design contribute to the longevity and reliability of turbochargers, allowing them to withstand high temperatures and high-stress conditions.
In essence, turbocharger design refinement is a dynamic interplay of regulatory pressures, performance demands, technological innovations, market dynamics, and environmental consciousness, all converging to shape the future of propulsion engines.
Structured meshes play a pivotal role in propelling the enhancement of volutes and turbochargers, influencing various factors driving design improvements. Firstly, they elevate the accuracy of simulation results, enabling precise predictions of fluid flow behaviours like pressure distributions and velocity profiles. This insight aids engineers in pinpointing areas for enhancement and refining design parameters.
Secondly, structured meshes deepen the comprehension of flow physics within these components, identifying phenomena like flow separation and vortices. This comprehension inspires innovative design concepts and optimization strategies geared towards boosting performance and efficiency.

Moreover, structured meshes facilitate parametric studies, allowing engineers to systematically optimize geometric parameters while maintaining mesh quality. This exploration of the design space leads to the discovery of optimal configurations aligned with performance objectives.
Additionally, these meshes aid in evaluating aero-thermal performance and mitigating flow instabilities, contributing to more robust and reliable designs. They also support the validation of design concepts by providing accurate predictions for comparison with experimental data, reducing development time and accelerating innovation.
In essence, structured meshes serve as the foundation for accurate simulations, fostering deeper understanding, optimization, and validation processes that collectively drive advancements in volute and turbocharger designs.

Traditionally, the process of generating high-quality meshes for volutes has been labour-intensive and time-consuming, requiring manual intervention and expertise in meshing software. However, with the unveiling of GridPro’s latest innovation, this cumbersome process is now a relic of the past.
GridPro has introduced an automation tool designed to effortlessly generate topology and mesh for volute geometries. Through its intuitive workflow and robust meshing algorithms, GridPro streamlines the mesh generation process. The algorithm seamlessly generates topology and meshes on volutes with unparalleled efficiency and precision.
Whether it’s creating structured meshes for volutes with intricate geometries or optimizing mesh density for turbocharger simulations, GridPro empowers engineers to achieve superior results with minimal computational overhead.

GridPro’s automated solution for volute geometries represents more than just a technological advancement; it embodies a paradigm shift in engineering design and simulation. By harnessing the power of automation, engineers can transcend the limitations of manual mesh generation, unlocking new possibilities in product development and optimization.
Gone are the days of laborious meshing processes and tedious iterations. With GridPro’s innovation as their ally, engineers can embrace a future of seamless design, where creativity and efficiency converge to propel projects forward.
Ready to Automate Your Meshing Workflow?
GridPro Xpress Volute
GridPro’s intelligent structured meshing automation solution reduces manual effort and maximizes accuracy—making it ideal for design optimization in CFD.
Schedule a free demo or contact us to see how GridPro can accelerate your simulation pipeline.
In conclusion, GridPro’s automated solution for volute geometries heralds a new era of efficiency and productivity in engineering design. By streamlining the volute mesh generation process and eliminating manual labour, the tool empowers engineers to focus their expertise and creativity on solving complex challenges and driving innovation forward.
As the demands of modern engineering continue to evolve, GridPro remains at the forefront of technological innovation, delivering solutions that redefine the boundaries of possibility. With GridPro’s automation tool, the journey from concept to realization becomes smoother, faster, and more rewarding than ever before.
1. Discover GridPro Xpress Volute
2. “An investigation of volute cross-sectional shape on turbocharger turbine under pulsating conditions in internal combustion engine”, Mingyang Yang et al., Energy Conversion and Management 105 (2015) 167–177.
3. “The impact of volute aspect ratio and tilt on the performance of a mixed flow turbine”, Samuel P. Lee et al., Proc IMechE Part A: J Power and Energy 2021, Vol. 235(6) 1435–1450.
4. “Unsteady behaviours of a volute in turbocharger turbine under pulsating conditions”, Mingyang Yang et al., J. Glob. Power Propuls. Soc. 2017, 1: 237–251.
5. “The Effect of Volute Design on the Performance of a Turbocharger Compressor”, A. Whitfield et al., International Compressor Engineering Conference, Paper 1501.
6. “Important Considerations When Designing a Volute”, an article by Jamin Bitter.
7. “How Turbocharger Design is Changing as Car Firms Chase Efficiency”, an article on secotools.com.
8. “Turbochargers for higher engine efficiency”, an article by Lucie Maluck.
9. “Downsized, boosted gasoline engines”, Aaron Isenstadt and John German (ICCT) et al., International Council on Clean Transportation, 2016.
10. “Variable Geometry Turbocharger Technologies for Exhaust Energy Recovery and Boosting - A Review”, Adam J. Feneley et al., Renewable and Sustainable Energy Reviews 71 (2017) 959–975.
11. “Multi-objective optimization of turbocharger turbines for low carbon vehicles using meanline and neural network models”, Prakhar Kapoor et al., Energy Conversion and Management: X 15 (2022) 100261.
12. “A Review of Novel Turbocharger Concepts for Enhancements in Energy Efficiency”, A. Kusztelan et al., Int. J. of Thermal & Environmental Engineering, Volume 2, No. 2 (2011) 75–82.
13. “Electric Turbocharging for Energy Regeneration and Increased Efficiency at Real Driving Conditions”, Pavlos Dimitriou et al., Appl. Sci. 2017, 7, 350; doi:10.3390/app7040350.
The post Automatic Structured Hexahedral Meshes for Volutes appeared first on GridPro Blog.
Figure 1: Structured multi-block meshing of a heat pump compressor.
Shifting to low-GWP refrigerants is reshaping centrifugal compressor design for heat pumps. This blog reveals how structured hexahedral meshing with GridPro empowers CFD engineers to tackle real-gas effects, optimize performance, and accelerate innovation—ensuring sustainable, high-efficiency compressors that meet the challenges of a changing HVAC landscape.
As global energy policies tighten and environmental awareness rises, the HVAC and energy industries are shifting toward low-Global Warming Potential (GWP) refrigerants. In this evolving landscape, centrifugal compressors used in heat pumps are undergoing critical redesign. These systems must now meet higher performance and sustainability standards while accommodating complex thermodynamic behaviors. CFD engineers and R&D managers play a pivotal role in ensuring these compressors adapt to new refrigerants without compromising efficiency or reliability. Structured meshing, particularly hexahedral multiblock techniques, is emerging as a powerful enabler of accurate CFD simulations tailored to this transformation.
The push to replace high-GWP refrigerants like R134a is driven by both regulatory mandates and sustainability goals. New options such as R1234yf, R1234ze(E), CO₂, and ammonia offer significantly lower environmental impact. However, these alternatives introduce design complications, including altered thermophysical properties, higher pressures, and non-ideal fluid behaviors. Transitioning to these refrigerants isn’t as simple as swapping fluids; it requires a re-evaluation of compressor design to ensure optimal thermal and mechanical performance.
CFD simulations play a central role in assessing how different refrigerants impact efficiency, mass flow rate, pressure ratio, and power requirements. These metrics are influenced by a refrigerant’s specific heat ratio, compressibility, and speed of sound. For instance, R1234ze(E), with a lower speed of sound than R134a, can result in higher pressure ratios at the same rotational speeds but may require adjustments to blade angle and diffuser geometry to maintain performance. These evaluations are only reliable when supported by high-fidelity meshes that resolve critical features of the compressor flow path.
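As an illustrative sketch of why the speed of sound matters (the sound speeds below are assumed round numbers, not property data; use a property library for real design work), the impeller tip Mach number rises as the refrigerant's speed of sound falls, pushing the stage toward higher pressure ratios at the same rotational speed:

```python
def tip_mach(tip_speed_m_s, speed_of_sound_m_s):
    """Machine Mach number: impeller tip speed over the fluid's speed of sound."""
    return tip_speed_m_s / speed_of_sound_m_s

# Illustrative (assumed) vapor speeds of sound near typical suction
# conditions -- pull real values from REFPROP/CoolProp for design work.
a_sound = {"R134a": 150.0, "R1234ze(E)": 140.0}

u_tip = 250.0  # m/s, same rotor and shaft speed for both fluids
for fluid, a in a_sound.items():
    print(f"{fluid}: tip Mach ~ {tip_mach(u_tip, a):.2f}")
```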

Accurate CFD modeling enables engineers to simulate real-world operation and iterate compressor designs quickly. With low-GWP refrigerants, real gas effects become prominent and must be incorporated using equations of state like Peng-Robinson or Redlich-Kwong. Property libraries such as NIST REFPROP or CoolProp are commonly integrated into CFD workflows to provide refrigerant-specific data.
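For reference, the pressure-explicit form of the Peng-Robinson equation of state can be sketched in a few lines. The critical constants below are approximate literature values for R134a; a production workflow would pull them from REFPROP or CoolProp rather than hard-coding them:

```python
import math

R = 8.314462618  # J/(mol*K), universal gas constant

def peng_robinson_pressure(T, v, Tc, Pc, omega):
    """Peng-Robinson EOS: P(T, v) for a pure fluid.
    T in K, molar volume v in m^3/mol, Tc in K, Pc in Pa, omega = acentric factor."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    return R * T / (v - b) - a * alpha / (v**2 + 2.0 * b * v - b**2)

# Approximate critical constants for R134a (literature values).
Tc, Pc, omega = 374.21, 4059.28e3, 0.3268
P = peng_robinson_pressure(T=300.0, v=2.0e-3, Tc=Tc, Pc=Pc, omega=omega)
print(f"P ~ {P / 1e5:.2f} bar")  # noticeably below the ideal-gas RT/v at this density
```

The attraction term pulls the predicted pressure well below the ideal-gas value at compressor-relevant densities, which is exactly the real-gas effect the CFD solver must capture.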
CFD also supports flow visualization and loss analysis, helping engineers refine impeller and diffuser shapes to reduce aerodynamic losses. Identifying zones of high entropy generation, flow recirculation, or separation enables targeted geometric modifications. For example, blade tip leakage losses and flow detachment in the diffuser region are critical for compressors using refrigerants like CO₂ or hydrocarbons operating at high pressures. Well-resolved CFD studies guide designers in mitigating these effects through blade curvature optimization, hub contouring, or diffuser vane adjustments.
Moreover, system-level performance can be predicted through CFD results linked with thermodynamic cycle simulators. This integration allows teams to evaluate how compressor performance translates into system COP under variable operating conditions. Such an approach provides a holistic view and supports informed decision-making on refrigerant choice and component sizing.
In the CFD workflow, mesh generation directly influences the accuracy, stability, and convergence of the simulation. In centrifugal compressors, flow behavior is strongly affected by complex geometry and rotating machinery effects. Components such as impellers with main and splitter blades, tight tip clearances, and curved volute channels introduce abrupt velocity gradients, secondary flows, and shocks.
Mesh resolution must be high enough to capture boundary layers, pressure gradients, and thermal interactions. Especially in wall-bounded regions, achieving target y+ values (typically <1) is necessary to ensure compatibility with turbulence models like k-ω SST. This is vital when simulating flow separation or heat transfer within the volute or impeller shroud.
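A common starting point for hitting a y+ target is the flat-plate turbulent skin-friction estimate; the sketch below uses assumed, refrigerant-vapor-like fluid properties purely for illustration, and the resulting height is only an initial guess to be checked against the converged solution:

```python
def first_cell_height(y_plus, u_inf, rho, mu, x_ref):
    """Estimate the wall-adjacent cell height for a target y+ using the
    flat-plate turbulent skin-friction correlation Cf = 0.026 / Re_x^(1/7).
    A starting guess only; verify y+ from the converged solution."""
    re_x = rho * u_inf * x_ref / mu
    cf = 0.026 / re_x ** (1.0 / 7.0)
    tau_w = 0.5 * cf * rho * u_inf**2          # wall shear stress
    u_tau = (tau_w / rho) ** 0.5               # friction velocity
    return y_plus * mu / (rho * u_tau)

# Illustrative (assumed) properties, not data for any specific refrigerant.
dy = first_cell_height(y_plus=1.0, u_inf=200.0, rho=20.0, mu=1.2e-5, x_ref=0.05)
print(f"first cell height ~ {dy * 1e6:.2f} micron")
```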
Mesh quality is equally important when working with real gas models. Rapid property changes due to pressure and temperature fluctuations can lead to convergence issues if the mesh is distorted or insufficiently refined. A mesh with good orthogonality, low skewness, and gradual stretching supports robust simulations even under these challenging conditions.
A well-executed grid independence study further enhances credibility by verifying that simulation outputs remain consistent across different mesh densities. Balancing computational cost and accuracy, such studies help teams standardize mesh sizes while maintaining trust in the results.
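One widely used recipe for such a study is Roache's Grid Convergence Index, computed from three systematically refined meshes. A minimal sketch, with made-up solution values standing in for a monitored quantity such as stage pressure ratio:

```python
import math

def gci_fine(f_coarse, f_medium, f_fine, r=2.0, fs=1.25):
    """Grid Convergence Index for the fine mesh from three solutions
    at constant refinement ratio r (Roache's method, safety factor fs)."""
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    eps = (f_medium - f_fine) / f_fine        # relative change, fine vs medium
    return fs * abs(eps) / (r**p - 1.0), p

# Made-up pressure ratios from coarse / medium / fine meshes.
gci, p = gci_fine(3.052, 3.080, 3.089, r=2.0)
print(f"observed order p ~ {p:.2f}, GCI ~ {gci * 100:.3f} %")
```

A GCI well below 1% is commonly taken as evidence that the fine-mesh result is effectively grid-independent for the monitored quantity.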
Structured hexahedral meshes are ideal for turbomachinery simulations because they offer higher numerical accuracy and control than unstructured meshes. By aligning elements with the main flow direction, they minimize interpolation errors and numerical diffusion, which is particularly advantageous in high-gradient regions near blade surfaces.
They also facilitate cleaner layering near walls, enabling more reliable use of wall-resolved turbulence models. This becomes especially important when analyzing centrifugal compressors operating under transonic or off-design conditions, where minor differences in wall shear can influence performance and efficiency.
In post-processing, structured meshes allow engineers to interpret simulation results more clearly. Streamlines, pressure contours, and velocity vectors derived from well-ordered grids yield more consistent visualizations, helping teams identify flow anomalies and validate design improvements. The predictability and stability of structured meshes also reduce solver crashes and improve convergence speed—benefits that accumulate over repeated design cycles.

GridPro provides a specialized platform for structured hexahedral mesh generation, optimized for complex geometries like those in centrifugal compressors. Its topology-based approach allows engineers to define reusable block templates that can be adapted to different impeller shapes, diffuser configurations, or refrigerant conditions. This flexibility accelerates geometry-to-mesh workflows, making it easier to manage design iterations.
One of GridPro’s key strengths lies in boundary layer control. With fine resolution settings, engineers can maintain strict y+ targets while smoothly transitioning from near-wall elements to the outer domain. This is particularly useful when working with turbulence models and wall heat transfer, both of which are critical for compressors handling refrigerants with large thermal gradients.
GridPro also supports wake refinement and shock-fitting capabilities. These features are essential for accurately capturing flow structures behind blade trailing edges and in regions of sudden expansion or compression. For example, in compressors operating with high-pressure refrigerants, these mesh refinements help capture oblique shocks and shear layers without excessive numerical dissipation.
GridPro also offers an automation solution specifically valuable for centrifugal compressor design: GridPro Xpress Blade. This tool enables automatic generation of structured multiblock meshes for impeller blades, streamlining the creation of high-quality meshes that align closely with blade geometry. Xpress Blade is programmed to produce solver-ready meshes with minimal manual input. For engineers performing iterative simulations across varying blade profiles or refrigerants, this tool significantly shortens meshing time without compromising grid fidelity. Its ability to consistently generate mesh blocks around blades, splitters, and trailing edge regions enhances wake capture and overall mesh convergence. As a result, Xpress Blade helps integrate mesh generation seamlessly into automated design and optimization workflows.
GridPro meshes are compatible with major CFD solvers like ANSYS CFX, Fluent, OpenFOAM, and STAR-CCM+, which streamlines downstream simulation efforts. Engineers can also incorporate GridPro meshes into automated parametric studies and optimization frameworks using Python or third-party integration tools, ensuring scalability across projects.
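As a hedged sketch of what the scripting side of such a parametric study might look like (the directory layout, parameter names, and values are hypothetical and not a GridPro API), a sweep definition can be generated in a few lines of Python; a driver script would then invoke the mesher and solver on each case directory:

```python
import itertools
import json
import pathlib

# Hypothetical parametric sweep: write one case definition per combination
# of blade count and diffuser vane angle.
blade_counts = [9, 11, 13]
diffuser_angles_deg = [8.0, 10.0, 12.0]

cases = []
for n_blades, angle in itertools.product(blade_counts, diffuser_angles_deg):
    case_dir = pathlib.Path(f"cases/nb{n_blades}_da{angle:.0f}")
    case_dir.mkdir(parents=True, exist_ok=True)
    params = {"blade_count": n_blades, "diffuser_angle_deg": angle}
    (case_dir / "params.json").write_text(json.dumps(params, indent=2))
    cases.append(case_dir)

print(f"{len(cases)} cases written")  # 3 x 3 combinations
```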

The transition to low-GWP refrigerants in centrifugal compressor applications brings with it a set of complex engineering challenges. Meeting performance goals while ensuring sustainability and compliance requires a deep integration of CFD, real gas modeling, and high-quality structured meshing.
Structured hexahedral meshes—and specialized tools such as Xpress Blade—provide the fidelity and flexibility necessary to simulate and optimize modern compressor designs. For engineers and R&D leaders in the heat pump sector, investing in robust meshing strategies is a foundational step toward reliable, efficient, and future-ready product development.
1. “Centrifugal compressor design and cycle analysis of large-scale high temperature heat pumps using hydrocarbons”, Antti Uusitalo et al., Applied Thermal Engineering 247 (2024) 123035.
2. “Design and CFD analysis of centrifugal compressor and turbine for supercritical CO2 power cycle”, Ashish Chaudhary et al., The 6th International Symposium - Supercritical CO2 Power Cycles, March 27–29, 2018, Pittsburgh, PA.
3. “Design and Operation of a Centrifugal Compressor in a High Temperature Heat Pump”, Benoît Obert et al., 5th International Seminar on ORC Power Systems, September 9–11, 2019, Athens, Greece.
4. “Combining Thermodynamics-based Model of the Centrifugal Compressors and Active Machine Learning for Enhanced Industrial Design Optimization”, Shadi Ghiasi et al., 1st Workshop on Synergy of Scientific and Machine Learning Modeling (SynS & ML), ICML, Honolulu, Hawaii, USA, July 2023.
5. “Study of Performance Changes in Centrifugal Compressors Working in Different Refrigerants”, Yintao Wang et al., Energies 2024, 17, 2784.
6. “Design of centrifugal compressors for heat pump systems”, Andrea Meroni et al., Applied Energy, 232, 139–156.
7. “The Characteristic of High-Speed Centrifugal Refrigeration Compressor with Different Refrigerants via CFD Simulation”, Kuo-Shu Hung et al., Processes 2022, 10, 928.
8. “Energy Characteristics of the Compressor in a Heat Pump Based on Energy Conversion Theory”, Yingju Pei et al., Processes 2025, 13, 471.
9. “CFD Simulation of a Centrifugal Compressor using Star-CCM+”, Sai Anirudh Ravichandran, Master’s thesis in Applied Mechanics, Chalmers University of Technology, Göteborg, Sweden, 2022.
10. “Design of the first stage of a centrifugal compressor with R1234ze(E) for heat pump in district heating”, Antonio Fois, THRUST Master of Science thesis, Université de Liège, Faculty of Applied Sciences, academic year 2020–2021.
The post From Grid to Green: Hexahedral Meshing for Low-GWP Centrifugal Compressor Designs in Heat Pumps appeared first on GridPro Blog.
Figure 1: Space debris research: structured multi-block mesh for a satellite. Image source: mesh generated by our French distributor, R.Tech.
What happens when space junk falls back to Earth—and how can we predict the impact before it’s too late?
With thousands of defunct satellites and rocket fragments orbiting Earth, space debris poses a serious threat. This article uncovers how cutting-edge CFD simulations and intelligent meshing strategies are being used to predict the reentry behavior of this debris—helping prevent disasters and protect both space assets and life on Earth.
As space activity intensifies, Earth’s orbit is becoming increasingly cluttered with defunct satellites, spent rocket stages, and mission-related fragments—collectively referred to as space debris. These objects, once they complete their orbital life, often re-enter the atmosphere in unpredictable and dangerous ways. Understanding how this debris behaves during atmospheric reentry is critical for safeguarding both space assets and lives on the ground.
Computational Fluid Dynamics (CFD) has become a vital tool for simulating the complex flow and thermal environments experienced by these objects. However, the challenges of modeling irregular debris shapes, rapidly changing geometries, and dynamic trajectories require not only robust simulation techniques but also intelligent meshing strategies.
This article explores the need for space debris research, the role of CFD in this domain, the challenges it entails, and how tools like GridPro help address the meshing demands essential to such high-fidelity simulations.
Research on space debris has gained urgency due to the growing threat it poses to satellite operations and public safety. As the number of man-made objects in orbit increases, so does the risk of collision and uncontrolled atmospheric reentry. In a worst-case scenario, known as the Kessler syndrome, a cascade of collisions could render certain orbits unusable. Moreover, as more debris is projected to re-enter the atmosphere in the coming years, predicting which objects will burn up and which might survive to reach Earth’s surface has become a major concern.
International guidelines, such as those from NASA’s Orbital Debris Program Office, stipulate that re-entering debris should pose no more than a 1 in 10,000 chance of causing harm on the ground. Current predictive models often fall short of this accuracy. Many use simplified geometries and outdated correlation models that underestimate heat rates or overestimate drag, resulting in uncertain survivability predictions.
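The 1-in-10,000 guideline is usually expressed as an expected-casualty number summed over the fragments predicted to survive reentry. A minimal sketch of that bookkeeping, with entirely made-up fragment areas and population density:

```python
def expected_casualties(fragment_areas_m2, population_density_per_km2):
    """Expected casualties E_c: sum over surviving fragments of
    (casualty area in m^2) x (population density in people per m^2)."""
    rho = population_density_per_km2 / 1.0e6   # convert to people per m^2
    return sum(fragment_areas_m2) * rho

# Made-up casualty areas for three fragments predicted to survive,
# and an assumed average population density under the ground track.
fragments_m2 = [0.5, 0.8, 1.2]
ec = expected_casualties(fragments_m2, population_density_per_km2=15.0)
print(f"E_c ~ {ec:.2e}  (guideline: < 1e-4)")
```

The whole exercise hinges on predicting which fragments survive and how large their casualty areas are, which is exactly where the CFD-based survivability models enter.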
To address these limitations, researchers are developing new methodologies that combine automated CFD computations, normalization techniques, and machine learning to create more reliable and comprehensive tools for assessing reentry risks.

CFD plays a foundational role in space debris research by providing high-fidelity data on aerodynamic characteristics and heat rates. This information is crucial for determining how a piece of debris will behave during atmospheric reentry, including its trajectory, velocity, angle of impact, and potential for ground damage. Traditional models, such as modified Newtonian theory, often fall short in accurately capturing complex flow phenomena, especially around concave or irregular geometries. CFD offers a superior alternative by simulating these intricate interactions with a high level of detail.
Modern CFD methods are capable of handling thousands of simulations across a wide array of shapes and flow conditions. These simulations often account for phenomena such as shock interactions, random tumbling motions, changes in wall temperature, and geometry transformations due to ablation.
Many CFD solvers solve full 3D Navier-Stokes or Euler equations and can model thermochemical non-equilibrium gas compositions typical of high-altitude reentry scenarios. Databases of non-dimensional parameters, such as drag coefficients and shape factors, are generated to aid in faster yet accurate risk assessments. CFD results are then validated using experimental data from hypersonic wind tunnels and free-flight testing, providing critical input for improving certification tools.
Despite its advantages, the use of CFD in space debris simulations is not without significant hurdles. High-fidelity simulations are computationally intensive. A single simulation involving a six-degree-of-freedom model can take anywhere from 30 to 60 CPU-hours, making large-scale probabilistic assessments impractical using conventional approaches.
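A quick budget illustrates the point: even a modest simulation matrix (the shape, attitude, and flow-condition counts below are hypothetical) multiplies into an enormous CPU-hour bill at the quoted per-run cost:

```python
def campaign_cpu_hours(n_shapes, n_attitudes, n_flow_conditions, hours_per_run=45.0):
    """Total CPU-hours for a brute-force campaign, using the midpoint of the
    quoted 30-60 CPU-hours per high-fidelity run."""
    return n_shapes * n_attitudes * n_flow_conditions * hours_per_run

total = campaign_cpu_hours(n_shapes=20, n_attitudes=50, n_flow_conditions=10)
print(f"{total:,.0f} CPU-hours")  # hundreds of thousands of hours for a modest matrix
```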
The tumbling nature of debris during atmospheric reentry introduces another layer of complexity, as the aerodynamic response varies significantly with object orientation. This requires simulations to be performed across numerous attitudes to capture an accurate average response.
Moreover, the diversity in debris shapes—from hollow hemispheres to irregular fragments—poses a challenge for both modeling and simulation. These objects may undergo ablation, changing their geometry mid-flight, which complicates the simulation further. Accurately capturing the interaction between the flow and these complex surfaces necessitates high-quality meshes.
Simulating thermal behavior, shock-shock interactions, and catalycity adds even more to the computational burden. To manage this, researchers are increasingly relying on normalized databases and advanced interpolation methods, which allow the reuse of CFD results across different scenarios without rerunning the entire simulation set.

Generating accurate and efficient meshes is one of the most challenging aspects of CFD simulations for space debris. Debris objects are often irregular, with sharp edges, cavities, or thin structures that demand high mesh resolution to capture essential features. However, increasing mesh resolution significantly raises computational costs and can make subsequent data-driven models, such as neural networks, too complex to be practical. Striking the right balance between detail and efficiency is crucial.
Another challenge lies in the need to maintain a static mesh structure when using deep learning models. Once a mesh is created for a specific object, it cannot be easily modified to represent different sizes or shapes without disrupting the model’s structure. This limitation becomes particularly problematic when trying to simulate objects that undergo shape changes during ablation.
Furthermore, mesh quality must be high enough to ensure convergence of the numerical methods used in CFD, especially in regions with strong gradients such as heatshield shoulders. Under-meshing in these regions can lead to inaccurate predictions of temperature and pressure distributions, potentially compromising the entire simulation.
To conduct meaningful CFD simulations on space debris, the mesh must meet several critical requirements. It needs to accurately capture the geometry of complex shapes and resolve important flow features such as shock interactions, recirculation zones, and expansion fans. The type of mesh used can vary depending on the simulation method. Unstructured meshes are favoured for their flexibility and local control over mesh density. Cartesian meshes, valued for their fast generation time and compatibility with automated simulations, are also widely used.
However, structured meshes—particularly multi-block structured meshes—are gaining popularity due to their ability to deliver high accuracy with fewer cells. These meshes are easier to validate for grid convergence and allow for efficient simulation across varying angles of attack using techniques like the rotating mesh approach. For scenarios involving machine learning, a single mesh is often used for all possible orientations, with the outer boundaries typically designed as spheres to ensure accurate wake modeling. Reusability is another key factor; once a high-quality grid topology is established, it can be reused for similar shapes, significantly reducing meshing time for future simulations.

Structured meshes offer several advantages that make them highly suitable for space debris CFD simulations. They enable the efficient resolution of complex flow features with fewer computational resources by maintaining a uniform grid quality. This type of mesh ensures high-quality results with a minimum number of cells. The block-based nature of these meshes allows them to adapt to various shapes without losing accuracy, making them ideal for modeling irregular debris geometries.
Structured meshes also support innovative modeling techniques such as the rotating mesh approach, where a single topology can be used to simulate various orientations of a tumbling object. This eliminates the need to generate new meshes for each attitude and helps automate database generation from CFD computations. Overall, structured meshes strike a desirable balance between precision, flexibility, and computational efficiency, making them invaluable in the context of space debris modeling.
GridPro plays a crucial role in facilitating high-quality CFD simulations of space debris by streamlining the meshing process and enhancing the overall efficiency of simulations. Its ability to generate massively multi-block structured meshes allows for the precise modeling of complex and irregular geometries commonly found in space debris. The grids produced are of consistently high quality, which is essential for ensuring numerical convergence and accurate resolution of physical phenomena during reentry.
One of the standout features of GridPro is its support for the rotating mesh approach. This enables researchers to simulate a complete range of angles of attack using a single mesh, significantly reducing the time and effort required to prepare for each new simulation scenario. Additionally, GridPro’s reusability feature allows users to apply the same grid topology to multiple objects with similar geometric layouts, further enhancing efficiency and consistency across simulations.
The block structure of the meshes also supports effective parallelization, making it easier to deploy simulations on high-performance computing clusters. This capability is particularly valuable when running hundreds or thousands of simulations for probabilistic analysis or database generation. Overall, GridPro acts as a critical enabler in the CFD workflow for space debris research, bridging the gap between geometric complexity and computational feasibility.
Space debris reentry poses significant risks that demand accurate and efficient predictive tools. CFD has proven to be a powerful method for capturing the complex aerodynamics and thermal behavior of re-entering objects, but the success of these simulations hinges on the quality and structure of the underlying mesh. Structured meshes, especially those generated with tools like GridPro, offer the precision, adaptability, and computational efficiency necessary for tackling the unique challenges presented by space debris.
As research continues to evolve, combining high-fidelity CFD data with advanced meshing strategies and machine learning will be essential for developing the next generation of risk assessment and mitigation tools. By doing so, the scientific community takes a critical step toward ensuring safer and more sustainable use of Earth’s orbital environment.
This article is an outcome of the extensive work done on Space Debris by our French Distributor – R.Tech. We thank them for their valuable contribution to this article.
The post Unraveling Space Debris Reentry with CFD and Structured Meshing appeared first on GridPro Blog.
Figure 1: Structured multi-block meshing of a volute.
With stricter pollution policies and increasingly eco-conscious customers, the demand for low-emission, energy-saving vehicles is steadily growing. With electric vehicles being all the rage, the fossil-fuel-driven automotive industry has upped its game by building highly efficient, downsized engines. This has been made possible by improvements in turbocharger design. Turbochargers enable a significant reduction in internal combustion engine size and reduce fuel consumption and emission levels. Further, they also increase the engine's rated power, the limiting torque curve, and the torque back-up.
Over the years, the understanding of the flow field and design of turbochargers has improved considerably. A major part of the industrial effort has gone into increasing the performance and reducing the losses of the impellers and vaned diffusers of the compressor and the turbine. However, insufficient attention has been paid to the flow diffusion in the compressor volutes, the losses incurred, and the influence of volutes on the overall performance of the compressors.
With the latest trend for compact engines and installation constraints, manufacturers are forced to reduce the turbocharger size. This means compact volute designs with the flow leaving the volute exit with considerable kinetic energy. Hence a more careful evaluation of the flow diffusion, ways to reduce losses, and improve performance by understanding the influence of geometric parameters in volutes is certainly needed.
The flows inside volutes are complex and multiple parameters influence them. Hence, it is no wonder that there is little consensus among designers as to what an optimized volute geometry looks like. Adding to this lack of clarity, volutes designed by different approaches give different pressure ratios and efficiency.
Extensive research work has gone into understanding and improving impellers and vaned diffusers in compressors and turbines. In contrast, volutes remain the least investigated and least understood component, yet they play an important role in the compressor's functioning.
The volute strongly influences the compressor's overall performance, stability limits, operating range, and pressure distortion at off-design conditions. Further, it is the volute, rather than the impeller, that determines the location of the compressor's best-efficiency point.
Small changes in volute design can have a significant global impact on compressor performance. For example, shortening the volute tongue can affect the volute's performance, which in turn influences the global performance of the radial compressor. Likewise, the dilemma over the choice of volute geometry, symmetrical or overhung, makes design decisions perplexing. These issues are just the tip of the iceberg one confronts while designing volutes.
Hence, it is critically important to understand the influence of design parameters on volute and compressor performance. Also, even though the loss in volutes is less telling than that in the impeller or vaned diffusers, the potential for improvement is still sizable. Any small reduction in losses by design modifications does make a meaningful impact.
Lastly, flows in turbocharger centrifugal compressors are by definition non-axisymmetric due to the asymmetric nature of the volute geometry, especially at off-design conditions. This generates pressure distortions in the circumferential direction, which worsen the compressor's stability and performance. This is a critical issue that needs serious attention.
Given the current trend of internal combustion engine downsizing, the stability of turbocharger centrifugal compressors is a major concern. Since these pressure distortions are produced in the volute, volute design has lately gained greater attention, as it considerably affects turbocharger performance.
At smaller mass flow rates, the volute acts as a diffuser, causing a rise in static pressure from the tongue to the volute exit. At larger mass flow rates, however, the volute becomes too small and the flow accelerates from the tongue to the volute exit.
Back-pressure disturbances develop in the volute, most notably at the tongue region, and propagate upstream, influencing the flow at the diffuser and impeller exit. This results in extra losses and pressure distortions around the impeller periphery. Further, these pressure distortions reduce the stage performance and directly affect impeller and diffuser flow stability.
An offshoot of the circumferential pressure distortion is the creation of radial forces on the impeller shaft, which can sometimes lead to failure of the shaft bearings. The circumferential non-uniformity of the flow at the impeller exit also causes mixing losses in the diffuser. Lastly, the cyclic variation in the impeller channels at each rotation results in additional energy dissipation.
The repercussions of these pressure fluctuations caused by the volute-impeller interaction can be felt through elevated levels of noise and vibration. This is especially so near the tongue region. Hence, understanding the flow field distortion and ways to modify the volute design to reduce this distortion is of critical importance.

A number of the volute's geometric parameters influence the compressor's performance, stability, and operating range. Out of this array, five parameters, namely the cross-sectional area distribution, cross-sectional shape, radial position of the cross-section, location of the volute inlet, and tongue geometry, have been recognized by many researchers as the major influences.
These parameters are related to the flow characteristics and losses inside the volute and hence directly impact the overall compressor performance. The following sections elaborate on these geometrical parameters in greater detail.
Studies have shown that volutes with a cross-sectional area that increases circumferentially display better efficiencies and pressure ratios than constant-area cross-sections. In particular, a linear increase in area provides the best head and efficiency. This is because volutes with increasing area produce a uniform pressure distribution at design conditions. At off-design conditions, however, large pressure distortions are observed.
At low flow rates, the volute cross-sectional area is large. As a consequence, the flow initially decelerates causing a rise in static pressure, but later at the tongue, the pressure drops suddenly. On the other hand, at very high flow rates, the volute area is too small which causes a decrease in pressure as the fluid accelerates in the circumferential direction, but later at the tongue, the pressure suddenly increases.

Moving further, if we consider only the change in cross-sectional area, studies reveal that at low flow rates a larger volute cross-sectional area significantly decreases the maximum rise in pressure coefficient but increases the maximum flow coefficient of the compressor. Further, the maximum efficiency drops by up to 2% as the area increases, and it shifts to higher flow rates.
The shape of the cross-section has an influence on volute losses. On closer observation, it is seen that modification in cross-section shape affects the operating range more than the peak efficiency. Volutes with various shapes including circular, semi-circular, elliptic, rectangular, and square have been experimented with and their influence closely assessed.

Out of these different cross-sections, volutes with circular cross-sections exhibit lower wall friction and mixing losses as they have a smaller wetted area. Also, it is observed that, with circular cross-sections, it is possible to eliminate the secondary vortices developing inside the volute.
Volutes with square or rectangular sections are often used because they are easy to manufacture. However, square cross-sections perform worse than circular ones, as they incur greater flow losses. For the same reason, rectangular sections are inferior to square ones.
Studies experimenting with volute inlet location have shown that tangential inlets (Figure 5b) are more efficient compared to symmetric volutes (Figure 5a). It is observed that asymmetrical shapes provide a larger stable operating range, higher mass flow rate, and higher pressure coefficient.

The reason is that tangential inlets produce a single vortex, while symmetric volutes produce a double-vortex structure. With a double vortex, the distance between regions of opposite flow direction is reduced, and the radial velocity gradients increase close to the diffuser outlet. Both effects raise the shear stress and hence the losses in symmetrical volutes.

Variation in the radial position of the volute channel increases or decreases losses, thereby influencing compressor performance. By conservation of angular momentum, the tangential velocity in a swirling flow is inversely proportional to the radius. If the volute channel is positioned above the diffuser at a radius smaller than the diffuser outlet, then the tangential velocity inside the volute channel is higher than that at the diffuser outlet. This results in additional losses and an undesired static pressure drop.
If we keep the cross-sectional shape and the circumferential variation of the cross-sectional area constant and only move the volute channel to a larger radius, a reduction in loss coefficient of up to 30% is observed.
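The angular-momentum argument behind this trend can be illustrated numerically; the velocities and radii below are hypothetical placeholder values, not data from any of the cited studies:

```python
# Minimal sketch of the free-vortex argument: r * V_theta = const, so
# moving the volute channel to a larger radius lowers the tangential
# velocity, and with it the kinetic energy that must be diffused or lost.
# All numbers are illustrative placeholders.

def tangential_velocity(v_ref, r_ref, r):
    """Free-vortex swirl: V_theta(r) = V_ref * r_ref / r."""
    return v_ref * r_ref / r

v_diff_exit = 100.0   # m/s, swirl velocity at the diffuser exit (placeholder)
r_diff_exit = 0.10    # m, diffuser exit radius (placeholder)

# internal channel (smaller radius), flush, and external channel (larger radius)
for r_volute in (0.08, 0.10, 0.13):
    v = tangential_velocity(v_diff_exit, r_diff_exit, r_volute)
    print(f"r = {r_volute:.2f} m -> V_theta = {v:6.1f} m/s")
```

An internal channel (r = 0.08 m here) sees a higher swirl velocity than the diffuser exit, consistent with the extra losses and pressure drop noted above, while an external channel sees a lower one.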

On the other hand, for an internal volute with a cross-section radius smaller than the diffuser exit radius, a high loss coefficient is observed over the entire operating range. In fact, the losses are as high as in collectors.
Further, even a small variation in the radial position of the volute tongue with respect to the exit diffuser cone affects the global efficiency of the volute: increasing the radial position of the volute tongue increases efficiency.
Volute positional variations in the axial direction are usually presented as symmetric volutes and overhung volutes. For applications like the aerospace field where installation constraints such as space and weight are high, overhung volutes are employed.

Experiments and CFD simulations have shown that asymmetric volutes exhibit higher efficiencies than symmetric ones. This is because symmetric volutes are vulnerable to larger blockages at the inlet due to the generation of double swirl vortices. Also, a much stronger mixing process is observed to occur in symmetric volutes than overhung volutes, leading to higher losses.

When compared with forward- and symmetry-installed volutes, backwardly installed (overhung) volutes show lower total pressure loss and higher static pressure recovery. The speed asymmetry coefficient at the backwardly installed volute exit is also slightly higher than at the forwardly or symmetry-installed volute exits. Further, the backwardly installed volute has a more reasonable and uniform velocity field and better overall performance under different conditions. This is mainly because of the non-uniform outlet flow at the diffuser.
The radial clearance between the volute tongue and the impeller has a significant impact on the pressure distribution inside the volute and hence on stage performance. A larger radial clearance reduces the interaction between the volute tongue and the impeller, which helps smooth the circumferential pressure variation in the tongue region.

However, increasing the radial clearance increases the recirculating flow in the gap between the tongue and the impeller and decreases the volute exit cross-section.
Normally, a volute with zero clearance performs well at the design point but is less adaptive to variations in flow conditions. Smoothening the leading edge of the volute tongue, in other words providing a radial clearance, helps the volute achieve better efficiency and a stable flow range under off-design conditions as well.

To find the optimal radial clearance over the entire flow range of a volute, the gap can be modified either by changing the length of the tongue or by changing the tongue angle. In other words, the position and shape of the tongue become the influential geometric variables affecting machine performance. Rounding and retracting (shortening) the tongue increases the machine head and compression ratio at both low and high flow rates. Furthermore, many researchers have reported a significant reduction in noise levels and unbalanced aerodynamic forces when the tongue is moved away from the impeller.
A good number of studies have been done to understand and quantify the role of these five major volute geometric parameters and other minor variables. However, further research is needed to gain more clarity on, and appreciation for, the role of geometric parameters in volute flows. With the availability of parametric modeling software like Caeses and component-specific structured meshing software like GridPro, a multitude of geometric variants can be built and meshed in an automated environment and CFD simulations performed. With parametric modeling, the level of influence of each geometric parameter can be brought out more clearly and accurately. Perhaps, after such optimization exercises by independent researchers and organizations, some general consensus can be reached on what an optimized volute looks like.
1. Martin Heinrich et al., "Genetic Algorithm Optimization of the Volute Shape of a Centrifugal Compressor", International Journal of Rotating Machinery, Vol. 2016, Article ID 4849025.
2. Jose Francisco Cortell Forés, "Effect of inlet configuration and pulsation on turbocharger performance for enhanced energy recovery", PhD thesis, Department of Mechanical Engineering, Imperial College London, June 2018.
3. Christos Georgakis, "Experimental studies on volute-impeller interactions of centrifugal compressors having vaned diffusers", PhD thesis, academic year 2003.
4. Mingyang Yang et al., "Unsteady behaviours of a volute in turbocharger turbine under pulsating conditions", Journal of the Global Power and Propulsion Society, 2017, 1: 237–25.
5. Mingyang Yang et al., "An investigation of volute cross-sectional shape on turbocharger turbine under pulsating conditions in internal combustion engine", Energy Conversion and Management, 105 (2015) 167–177.
6. Samuel P. Lee et al., "The Impact of Volute Aspect Ratio on the Performance of a Mixed Flow Turbine", Aerospace, 2017, 4, 56.
7. C. Xu and R. S. Amano, "Design and optimization of turbo compressors", WIT Transactions on State of the Art in Science and Engineering, Vol. 42, 2008.
8. Xiaoqing Qiang et al., "Influence of various volute designs on volute overall performance", Journal of Thermal Science, Vol. 19, No. 6 (2010) 505–513.
9. Erkan Ayder, "Numerical Calculation of the Three-Dimensional Swirling Flow Inside the Centrifugal Pump Volutes", International Journal of Rotating Machinery, 9: 247–253, 2003.
10. D. Pan et al., "Design considerations for the volutes of centrifugal fans and compressors", Proc. Instn Mech. Engrs, Vol. 213, Part C, 1999.
11. Mohammad Mojaddam et al., "Investigation on Effect of Centrifugal Compressor Volute Cross-Section Shape on Performance and Flow Field", Proceedings of the ASME Turbo Expo, July 11–15, 2012, Copenhagen, Denmark.
12. Zhenzhong Sun et al., "Influence of volute design on flow field distortion and flow stability of turbocharger centrifugal compressors", Proc. IMechE Part D: J. Automobile Engineering, 1–11, 2018.
13. H. Mohtar et al., "Effect of Diffuser and Volute on Turbocharger Centrifugal Compressor Stability and Performance: Experimental Study", Oil & Gas Science and Technology – Rev. IFP Energies nouvelles, Vol. 66 (2011), No. 5, pp. 779–790.
14. Nima Khoshkalam et al., "Characterization of the Performance of a Turbocharger Centrifugal Compressor by Component Loss Contributions", Energies, 2019, 12, 2711.
15. Mohammad Mojaddam et al., "Optimal design of the volute for a turbocharger radial flow compressor", Proceedings of ASME Turbo Expo 2014, GT2014, June 16–20, 2014, Düsseldorf, Germany.
16. Chehhat Abdelmadjid et al., "CFD Analysis of the Volute Geometry Effect on the Turbulent Air Flow through the Turbocharger Compressor", TerraGreen 13 International Conference 2013, Energy Procedia, 36 (2013) 746–755.
17. X. Q. Zheng et al., "Influence of the volute on the flow in a centrifugal compressor of a high-pressure ratio turbocharger", Proc. IMechE, Vol. 224, Part A: J. Power and Energy, JPE968, July 2010.
18. Martin Heinrich, "Genetic Optimization of Turbomachinery Components using the Volute of a Transonic Centrifugal Compressor as a Case Study", PhD thesis, Technische Universität Bergakademie Freiberg, November 22, 2016.
19. https://blog.softinway.com/volute-design-in-axstream/
The post Designing Turbocharger Compressor Volutes appeared first on GridPro Blog.
This figure shows a simple but useful comparison for a real, non-trivial configuration: a Cessna 210 modified with a NASA Natural Laminar Flow (NLF) wing. The goal here is not “pretty CFD,” but practical validation for conceptual design work.
The Cessna 210 is not an academic “wing-only” case. It has a fuselage, wing-body junctions, tail surfaces, and the usual geometric complexity that shows up immediately when you try to run a 3D analysis.
With Stallion 3D, the gridding step is not a week-long detour. Automatic Cartesian gridding makes it practical to iterate on complex shapes without turning the meshing workflow into the main project.
The comparison includes experimental results from NASA Technical Paper 2772 (full-scale general aviation airplane equipped with an advanced NLF wing). That dataset provides a grounded reference for lift and drag trends over angle of attack.
In the plots, Stallion 3D tracks the experimental behavior well over the usable range. For conceptual work, this is the point: you want predictions that are directionally correct, quantitatively reasonable, and stable enough to support decisions.
A vortex-lattice model (via 3DFoil) is also included for the wing-tail configuration. As expected for an inviscid lifting model, it provides a fast, low-friction reference that helps “triangulate” the physics.
When the VLM curve brackets or parallels the experimental/CFD trends, it increases confidence that the configuration-level aerodynamics are being captured consistently (especially in the pre-stall regime where conceptual sizing happens).
Taken together, the four views in the figure (experiment plus multiple computational models) provide a practical validation set.
This is the workflow I care about: reducing blind spots early. When multiple models (plus experimental data) tell a consistent story, you can move forward faster and spend your time on design choices instead of debating whether the analysis is “real.”
This same validation logic applies beyond the Cessna 210 example. Once the workflow is in place, it scales naturally to other configurations.
If you’re doing early-stage design and want “good physics quickly” on real geometries, this is the kind of comparison that matters.
Please visit Hanley Innovations for more information ➡️ https://www.hanleyinnovations.com
I used a deliberately vague prompt: “draw a picture of the best airfoil shape”. AI doesn’t know your mission requirements (Re, Mach, thickness constraints, lift target, stall margin, structure, manufacturing, etc.), so the output is always going to be a guess—but it’s still interesting what it “reaches for” when asked.
In this case, Gemini returned an airfoil that looks supercritical-ish: thicker mid-chord, flatter upper surface, and a sharper-ish trailing region. Is it “best”? No. But it’s a recognizable design intent: manage transonic pressure gradients and reduce wave drag.
Next, Copilot3D generated an STL from the concept image. Here’s the fun part: it didn’t produce a clean monoplane wing. It produced something closer to a biplane / joined-surface interpretation.
This is a good reminder that “image → CAD” isn’t a deterministic pipeline yet. The tool is inferring 3D structure from ambiguous cues—so you can get creative geometry even if you didn’t ask for it. That’s not a failure. It’s a feature (as long as you validate the aerodynamics).
Once the STL exists, you can stop debating what the shape “means” and just run it. I brought the CAD into Stallion 3D and solved at Mach 0.825. From there, the workflow becomes familiar: surfaces, pressure/Cp trends, and whatever integrated outputs you care about (lift, drag, moments).
The point isn’t that the AI created a production-ready aircraft. The point is that you can now move from an AI sketch to a solvable geometry to CFD-based insight fast enough to iterate.
If you want to run this experiment yourself, keep it simple.
One of the realities of engineering design is that many decisions are made before a detailed CFD campaign ever makes sense.
Landing gear placement, strut geometry, fairings, brackets, pylons, and similar components all introduce aerodynamic penalties. The question is usually not “what is the final answer?” but rather how the candidate design options compare.
This is where Stallion 3D fits into the workflow.
The examples shown compare two landing gear configurations using Stallion 3D. The goal is not high-end turbulence modeling or mesh tuning. The goal is fast, consistent comparison between design options.
With Stallion 3D, design engineers can evaluate and compare such configurations quickly and consistently.
The solver and grid generation are automatic and repeatable, so changes in forces and moments reflect geometry changes, not meshing differences.
Stallion 3D is not intended to replace detailed CFD at later stages. Instead, it helps narrow the design space early so higher-fidelity tools are applied only when they add value.
For many projects, this reduces iteration time, cost, and dependence on limited CFD resources, while keeping decisions grounded in physics.
If you have questions about using Stallion 3D in your design process, feel free to reach out.
Learn more ➡️ https://www.hanleyinnovations.com/stallion3d.html
I ran a new quiet-supersonic study at Mach 1.45 and 55,000 ft using the built-in atmosphere tables and Cartesian solver in Stallion 3D. The goal was to reproduce and understand the kind of pressure distribution seen in the NASA X-59 QueSST demonstrator, which recently completed its first flight. The idea is the same: manage the shock pattern so the ground hears a soft “thump” instead of a sonic boom.
The simulation shows a controlled series of small compressions marching down the forebody rather than one big, coalesced shock. That’s exactly what quiet-supersonic shaping is about—spreading the pressure rise (Δp/Δx) gradually so the far-field signature becomes a sequence of gentle steps instead of a single N-wave.
At these flight conditions, the distributed shock train is similar to what the X-59 team reported during their low-boom configuration tests. It’s encouraging to see Stallion 3D’s Navier–Stokes solver naturally produce the same kind of flow behavior on a simple Cartesian grid.
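A toy calculation (my sketch with placeholder numbers, not X-59 data) illustrates why splitting one coalesced shock into several weak compressions lowers the peak pressure gradient:

```python
# Toy illustration of low-boom shaping: the same total pressure rise
# delivered as one coalesced jump versus N distributed weak compressions.
# Spreading the rise lowers the peak dp/dx the ground signature inherits.
# Numbers are placeholders, not flight-test values.

def peak_gradient(total_rise, n_shocks, spacing):
    """Peak dp/dx when the rise is split into n equal jumps, each smeared
    over `spacing` (a crude surrogate for shock thickness/separation)."""
    return (total_rise / n_shocks) / spacing

dp_total = 100.0   # Pa, total overpressure budget (placeholder)
dx = 1.0           # m, smearing length per compression (placeholder)

print("single shock :", peak_gradient(dp_total, 1, dx))   # N-wave-like
print("5 weak shocks:", peak_gradient(dp_total, 5, dx))   # shaped signature
```

The real problem is far harder (atmospheric propagation can re-coalesce the shocks), but the scaling is the core idea: same total rise, one fifth the peak gradient.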
Right behind the cockpit, a red-blue compression and expansion pattern forms where the fuselage grows into the wing root. This region is a classic challenge in supersonic design—where cross-section growth and lifting surfaces meet, shocks can thicken and contribute to secondary noise.
It’s good to see that Stallion 3D’s refinement zone resolves these local gradients clearly, without any hand-built body-fitted grid. The automatic cell concentration gives an accurate look at how geometry transitions affect both drag and acoustic signature.
The aft wing and tail surfaces are doing real aerodynamic work. The pressure remains mostly clean, but there are still distinct compression and expansion regions being shed downstream.
In low-boom design, the rear shaping is as important as the nose. The aft body determines how the pressure signature closes—the part that controls how the sonic waveform ends. That’s the part that often separates a “thump” from a “bang.”
The local grid density around the aircraft shows that the refinement box is working exactly as intended. It captures oblique shocks and shear layers efficiently, even at Mach 1.45, without requiring a fitted mesh.
From a numerical standpoint, this confirms that Stallion 3D’s Cartesian method is practical for supersonic concept studies—especially for early X-59-style configurations or general quiet supersonic transport layouts.
The run used true high-altitude conditions (55,000 ft, Mach 1.45) from the built-in atmosphere model. These are the same conditions typically quoted for quiet-supersonic cruise tests and community response research under NASA’s QueSST program.
That realism matters for both acoustics and aerodynamics. At these pressures and densities, thin, swept lifting surfaces behave differently than they do in low-altitude transonic tests.
This quiet-supersonic run demonstrates what Stallion 3D does best—showing real aerodynamic detail from first principles without external meshing or post-processors. The solver’s ability to capture distributed shocks, canopy interactions, and aft-body effects all in one pass makes it an effective tool for early design of low-boom aircraft like the X-59 QueSST.
It’s not about pretty colors; it’s about credible data at real flight conditions. The results show a clean, believable Mach 1.45 solution with controlled shock structure—the kind of solution that points the way toward practical, certifiable overland supersonic transport.
Do Fish Swim Like Multi-Element Airfoils?
In nature, a school of fish moves as a coordinated system. Each fish swims in the wake of another, taking advantage of pressure differences and induced flows that reduce drag and save energy. It’s a clean example of fluid mechanics at work — and not too different from how engineers design multi-element airfoils for high lift.
The image above shows a simulation created from fish-shaped outlines. The shapes were first traced as simple drawings and then captured using Airfoil Digitizer. Airfoil Digitizer lets you turn almost any outline — hand-drawn, scanned, or imported — into an analysis-ready shape. You are not limited to NACA airfoils or standard sections. If you can sketch it, you can analyze it.
After digitizing the shapes, I placed them together and ran a potential flow solution in MultiElement Airfoils. This solver computes the velocity and pressure field around multiple bodies at once, and shows how they interact. The colored contours represent pressure: blue for low (suction) regions and red for higher pressure. You can see how each “fish airfoil” changes the flow around its neighbors, very much like the interaction between a slat, a main wing, and a flap.
This is the interesting part: even with playful shapes, the physics is still there. You get wake shielding, suction peaks, and local acceleration in the gaps. That’s the same family of effects we care about in real applications — multi-element wings, hydrofoils, propeller/wing interference, and UAV control surfaces working close together.
The workflow here was:
1) Sketch or outline a shape
2) Capture it with Airfoil Digitizer
3) Arrange multiple elements and solve the flow in MultiElement Airfoils
4) Visualize pressure and interaction
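For readers who want to poke at the idea outside a GUI, the link between a potential-flow velocity field and the colored pressure contours is just Bernoulli's equation, Cp = 1 − (V/V∞)². A minimal Python sketch on the classic closed-form cylinder solution (an illustrative textbook case, not the MultiElement Airfoils algorithm itself):

```python
import numpy as np

# Potential flow past a circular cylinder (uniform stream + doublet):
# a classic closed-form case used here only to illustrate how pressure
# contours follow from a velocity field via Bernoulli's equation.
U_inf, R = 1.0, 1.0                      # free-stream speed, cylinder radius
theta = np.linspace(0.0, np.pi, 181)     # surface points

# Surface speed for the cylinder solution: V = 2 U sin(theta)
V = 2.0 * U_inf * np.sin(theta)

# Pressure coefficient from Bernoulli: Cp = 1 - (V/U)^2
Cp = 1.0 - (V / U_inf) ** 2

print(f"Cp at stagnation point: {Cp[0]:.2f}")        # +1 at theta = 0
print(f"Minimum Cp (suction peak): {Cp.min():.2f}")  # -3 at theta = 90 deg
```

The suction peak (Cp = −3 at the shoulder) is the same kind of local acceleration that shows up as the blue regions in the gaps between the fish-shaped elements.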
It’s a fun demonstration, but also a serious one. Airfoil Digitizer gives you full control over the geometry. MultiElement Airfoils lets you study how multiple lifting surfaces behave together, not just one at a time. Together they make it easy to explore ideas, test concepts, and see the aerodynamics before you ever build a model.
Visit ➡️ https://www.hanleyinnovations.com
Best regards,
Patrick
For more information, please visit https://www.hanleyinnovations.com
I wrote about SGS models for LES about eight years ago. In that post, ILES was advocated for wall-resolved LES with dissipative high-order methods such as the discontinuous Galerkin, spectral difference, and flux reconstruction (FR) methods. Many groups, including ours, showed that adding SGS models produced worse results than ILES when compared to DNS data.
Earlier this year, however, we found an outlier: an SGS model improved the results of LES of turbulent channel flow with a 6th-order (P5) FR scheme. A further investigation led to a deeper understanding of SGS models in LES. Here are the main findings:
A Perfect Celebration
On December 5-7, 2024, a symposium, Emerging Trends in Computational Fluid Dynamics: Towards Industrial Applications, was successfully held at Stanford University to celebrate the 90th birthday of CFD legend Professor Antony Jameson. I am very grateful to Antony for giving Professor Chongam Kim and myself an opportunity to celebrate our 60th birthdays in conjunction with his. Thus, the symposium is also called the Jameson-Kim-Wang (JKW) symposium.
An organizing committee led by Professor Siva Nadarajah (McGill University) and composed of Professors Chunlei Liang (Clarkson University), Meilin Yu (UMBC), and Hojun You (Sejong University) did a fantastic job in organizing a flawless symposium. The list of speakers includes the who's who and rising stars in CFD. A special shoutout goes to Professor Juan Alonso and the sponsors for their support of the Symposium. A photo of the attendees is shown in Figure 1. Some good-looking posters from the sponsors are shown in Figure 2.
Antony's many pioneering contributions to CFD have been well documented in the literature. His various CFD and design optimization codes have shaped the design of commercial aircraft for many decades. Several aircraft manufacturers told stories about Antony's impact. We look forward to the release of the Symposium videos next year.
Next, I'd like to touch upon my personal connection to Antony. I first heard of his name and his work in China from my graduate advisor, Academician Zhang Hanxin. I still recall reading his paper on the successes and challenges in computational aerodynamics. I believe I first met Antony at an AIAA conference when he came to my talk on conservative Chimera. I did not get an opportunity to introduce myself. Our second meeting took place in China during an Asian CFD conference in 2000, where both of us were invited speakers. We sat at the same table with Charlotte (Mrs. Jameson) at a banquet. This time I was able to properly introduce myself.
Soon after that, we started collaborating on high-order methods, from spectral difference to flux reconstruction. I visited Antony's lab and co-organized his 70th birthday celebration at Stanford in late 2004. During a visit to his home, Antony shared his fascination with the aerodynamics of hummingbirds. I still recall receiving his phone call about proving the stability of the SD method with Gauss points as the flux points on a Saturday when I was at my son's soccer game!
The Symposium also gave me an opportunity to see many of my former students, some of whom I had not seen for more than two decades: Yanbing, Khyati, Prasad, Chunlei, Varun, Takanori, Meilin, Lei, Cheng, Feilin, Eduardo and Justin. It is very gratifying to hear their stories after so many years.
The Symposium concluded with an amazing banquet. My friend and collaborator, H.T. Huynh, did a hilarious roast of me, and I could not stop laughing the whole time. H.T. has the talent of a standup comedian. Everything went smoothly, and we had a perfect symposium!
In the computation of turbulent flow, there are three main approaches: Reynolds averaged Navier-Stokes (RANS), large eddy simulation (LES), and direct numerical simulation (DNS). LES and DNS belong to the scale-resolving methods, in which some turbulent scales (or eddies) are resolved rather than modeled. In contrast to LES, all turbulent scales are modeled in RANS.
Another scale-resolving method is the hybrid RANS/LES approach, in which the boundary layer is computed with a RANS approach while some turbulent scales outside the boundary layer are resolved, as shown in Figure 1. In this figure, the red arrows denote resolved turbulent eddies and their relative size.
Depending on whether near-wall eddies are resolved or modeled, LES can be further divided into two types: wall-resolved LES (WRLES) and wall-modeled LES (WMLES). To resolve the near-wall eddies, the mesh needs to have enough resolution in both the wall-normal (y+ ~ 1) and wall-parallel directions (x+ and z+ ~ 10-50) in terms of the wall viscous scale, as shown in Figure 1. For high-Reynolds number flows, the cost of resolving these near-wall eddies can be prohibitively high because of their small size.
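To make the wall-resolution requirement concrete, here is a back-of-the-envelope Python estimate of the first-cell height for a y+ of 1, using a common flat-plate skin-friction correlation; all numbers below (air properties, speed, reference length) are illustrative:

```python
import math

# Estimate the first-cell height for a target y+ using a flat-plate
# skin-friction correlation (Cf ~ 0.026 Re^(-1/7), a common rule of thumb).
rho, mu = 1.225, 1.8e-5      # density [kg/m^3], dynamic viscosity [Pa s]
U, L = 50.0, 1.0             # free-stream speed [m/s], reference length [m]
y_plus = 1.0                 # wall-resolved LES target in the wall-normal direction

Re = rho * U * L / mu
Cf = 0.026 * Re ** (-1.0 / 7.0)          # skin-friction estimate
tau_w = 0.5 * Cf * rho * U ** 2          # wall shear stress
u_tau = math.sqrt(tau_w / rho)           # friction velocity
y1 = y_plus * mu / (rho * u_tau)         # first-cell height: y = y+ * nu / u_tau

print(f"Re = {Re:.3g}, u_tau = {u_tau:.3f} m/s, first-cell height = {y1:.3e} m")
```

The resulting spacing is a few microns at this modest Reynolds number, which is why the cell count for WRLES grows so quickly as Re increases.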
In WMLES, the eddies in the outer part of the boundary layer are resolved while the near-wall eddies are modeled as shown in Figure 1. The near-wall mesh size in both the wall-normal and wall-parallel directions is on the order of a fraction of the boundary layer thickness. Wall-model data in the form of velocity, density, and viscosity are obtained from the eddy-resolved region of the boundary layer and used to compute the wall shear stress. The shear stress is then used as a boundary condition to update the flow variables.
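As a sketch of how an algebraic wall model can turn LES data at an exchange height into a wall shear stress, the snippet below solves the log law u/u_τ = (1/κ) ln(y u_τ/ν) + B for the friction velocity with standard constants (κ = 0.41, B = 5.2). Production WMLES codes often integrate a full wall ODE instead; this is only the simplest variant, with illustrative inputs:

```python
import math

# Algebraic (log-law) wall model: given the LES velocity u at an exchange
# height y, solve u/u_tau = (1/kappa) ln(y u_tau / nu) + B for u_tau by
# Newton iteration, then return the wall shear stress tau_w = rho u_tau^2.
def wall_shear_stress(u, y, nu, rho, kappa=0.41, B=5.2):
    u_tau = max(1e-6, 0.05 * u)          # initial guess
    for _ in range(50):                  # Newton iteration on f(u_tau) = 0
        f = u / u_tau - (math.log(y * u_tau / nu) / kappa + B)
        df = -u / u_tau**2 - 1.0 / (kappa * u_tau)
        step = f / df
        u_tau -= step
        if abs(step) < 1e-12:
            break
    return rho * u_tau**2, u_tau

# Illustrative inputs: 20 m/s sampled 1 mm off the wall in air-like fluid
tau_w, u_tau = wall_shear_stress(u=20.0, y=1e-3, nu=1.5e-5, rho=1.2)
print(f"u_tau = {u_tau:.3f} m/s, tau_w = {tau_w:.3f} Pa")
```

The returned τ_w is what would then be applied as the wall boundary condition for the LES momentum equations.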
During the past summer, AIAA successfully organized the 4th High Lift Prediction Workshop (HLPW-4) concurrently with the 3rd Geometry and Mesh Generation Workshop (GMGW-3), and the results are documented on a NASA website. For the first time in the workshop's history, scale-resolving approaches were included in addition to the Reynolds-Averaged Navier-Stokes (RANS) approach. These approaches were covered by three Technology Focus Groups (TFGs): High Order Discretization; Hybrid RANS/LES; and Wall-Modeled LES (WMLES) and Lattice-Boltzmann.
The benchmark problem is the well-known NASA high-lift Common Research Model (CRM-HL), which is shown in the following figure. It contains many difficult-to-mesh features such as narrow gaps and slat brackets. The Reynolds number based on the mean aerodynamic chord (MAC) is 5.49 million, which makes wall-resolved LES (WRLES) prohibitively expensive.
Figure: The geometry of the high-lift Common Research Model
University of Kansas (KU) participated in two TFGs: High Order Discretization and WMLES. We learned a lot during the productive discussions in both TFGs. Our workshop results demonstrated the potential of high-order LES in reducing the number of degrees of freedom (DOFs) but also contained some inconsistency in the surface oil-flow prediction. After the workshop, we continued to refine the WMLES methodology. With the addition of an explicit subgrid-scale (SGS) model, the wall-adapting local eddy-viscosity (WALE) model, and the use of an isotropic tetrahedral mesh produced by the Barcelona Supercomputing Center, we obtained very good results in comparison to the experimental data.
At the angle of attack of 19.57 degrees (free-air), the computed surface oil flows agree well with the experiment with a 4th-order method using a mesh of 2 million isotropic tetrahedral elements (for a total of 42 million DOFs/equation), as shown in the following figures. The pizza-slice-like separations and the critical points on the engine nacelle are captured well. Almost all computations produced a separation bubble on top of the nacelle, which was not observed in the experiment. This difference may be caused by a wire near the tip of the nacelle used to trip the flow in the experiment. The computed lift coefficient is within 2.5% of the experimental value. A movie is shown here.
Figure: Comparison of surface oil flows between computation and experiment
Figure: Comparison of surface oil flows between computation and experiment
Multiple international workshops on high-order CFD methods (e.g., 1, 2, 3, 4, 5) have demonstrated the advantage of high-order methods for scale-resolving simulation such as large eddy simulation (LES) and direct numerical simulation (DNS). The most popular benchmark from the workshops has been the Taylor-Green (TG) vortex case. I believe the following reasons contributed to its popularity:
Using this case, we are able to assess the relative efficiency of high-order schemes over a 2nd-order one, with the 3-stage SSP Runge-Kutta algorithm used for time integration. The 3rd-order FR/CPR scheme turns out to be 55 times faster than the 2nd-order scheme in achieving a similar resolution. The results will be presented at the upcoming 2021 AIAA Aviation Forum.
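For reference, the 3-stage SSP Runge-Kutta scheme mentioned above, in its standard Shu-Osher convex-combination form, sketched on the scalar model problem du/dt = −u (the model problem and step size are my own illustration):

```python
import math

# Three-stage SSP Runge-Kutta (Shu-Osher form). Each stage is a convex
# combination of forward-Euler steps, which preserves strong stability.
def ssp_rk3_step(u, dt, L):
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

L = lambda u: -u                   # model problem du/dt = -u, exact e^{-t}
u, dt, n = 1.0, 0.01, 100          # integrate to t = 1
for _ in range(n):
    u = ssp_rk3_step(u, dt, L)

print(f"numerical: {u:.8f}, exact: {math.exp(-1.0):.8f}")
```

The scheme is third-order accurate, so even this coarse step size reproduces e^{-1} to about eight digits.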
Unfortunately the TG vortex case cannot assess turbulence-wall interactions. To overcome this deficiency, we recommend the well-known Taylor-Couette (TC) flow, as shown in Figure 1.
Figure 1. Schematic of the Taylor-Couette flow (r_i/r_o = 1/2)
The problem has a simple geometry and boundary conditions. The Reynolds number (Re) is based on the gap width and the inner wall velocity. When Re is low (~10), the problem has a steady laminar solution, which can be used to verify the order of accuracy for high-order mesh implementations. We choose Re = 4000, at which the flow is turbulent. In addition, we mimic the TG vortex by designing a smooth initial condition, and also employing enstrophy as the resolution indicator. Enstrophy is the integrated vorticity magnitude squared, which has been an excellent resolution indicator for the TG vortex. Through a p-refinement study, we are able to establish the DNS resolution. The DNS data can be used to evaluate the performance of LES methods and tools.
Figure 2. Enstrophy histories in a p-refinement study
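For anyone implementing the enstrophy diagnostic, here is a minimal Python sketch on an analytic 2D velocity field whose exact enstrophy (the integral of |ω|²) is 4π²; a real LES/DNS code would accumulate the same quadrature over its own mesh and vorticity field:

```python
import numpy as np

# Enstrophy = integral of |vorticity|^2 over the domain, used as a
# resolution indicator. Demonstrated on the analytic periodic field
# u = sin x cos y, v = -cos x sin y, whose exact enstrophy is 4*pi^2.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)

dx = x[1] - x[0]
# z-vorticity via central differences on the periodic grid
omega = (np.roll(v, -1, 0) - np.roll(v, 1, 0)) / (2 * dx) \
      - (np.roll(u, -1, 1) - np.roll(u, 1, 1)) / (2 * dx)

enstrophy = np.sum(omega**2) * dx * dx   # midpoint-rule integral
print(f"enstrophy = {enstrophy:.4f} (exact 4*pi^2 = {4 * np.pi**2:.4f})")
```

Tracking this integral over time, as in Figure 2, shows whether the smallest resolved scales are adequately captured as the solution develops.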
Happy 2021!
The year 2020 will be remembered in history even more than 1918, when the last great pandemic hit the globe. As we speak, daily new cases in the US are on the order of 200,000, while the daily death toll oscillates around 3,000. According to many infectious disease experts, the darkest days may still be to come. In the next three months, we all need to do our very best by wearing a mask, practicing social distancing, and washing our hands. We are also seeing a glimmer of hope with several recently approved COVID vaccines.
2020 will be remembered more for what Trump tried and is still trying to do: to overturn the results of a fair election. His accusations of widespread election fraud were proven wrong in Georgia and Wisconsin through multiple hand recounts. If there were any truth to the accusations, the paper recounts would have uncovered the fraud, because computer hackers or software cannot change paper votes.
Trump's dictatorial habits were there for the world to see over the last four years. Given another 4-year term, he might just turn a democracy into a Trump dictatorship. That's precisely why so many voted in the middle of a pandemic. Biden won the popular vote by over 7 million and won the electoral college in a landslide. Many churchgoers support Trump because they dislike Democrats' stances on abortion, LGBT rights, and other issues. However, if a Trump dictatorship became reality, religious freedom might not exist anymore in the US.
Is the darkest day going to be January 6th, 2021, when Trump will make a last-ditch effort to overturn the election results in the Electoral College certification process? Everybody knows it is futile, but it will give Trump another opportunity to extort money from his supporters.
But the dawn will always come. Biden will be the president on January 20, 2021, and the pandemic will be over, perhaps as soon as the end of 2021.
The future of CFD is, however, as bright as ever. On the front of large eddy simulation (LES), high-order methods and GPU computing are making LES more efficient and affordable. See a recent story from GE.

Author:
Gopal S.
Engineer II – Marketing
With its truly autonomous meshing capabilities, CONVERGE automates grid generation, the most time-consuming step, for you. It not only simplifies your computational fluid dynamics (CFD) simulation workflow but also accelerates it. However, in CFD, problems rarely come with straightforward solutions: despite this advantage, challenges in pre- and post-processing can still slow down an analysis. Hence, to simplify things further, CONVERGE CFD software offers a wide array of tools with detailed documentation, all integrated into a single, user-friendly GUI, CONVERGE Studio. In this blog, we will explore the various CAD geometry preparation and manipulation tools integrated into CONVERGE Studio, which transform raw geometric data into simulation-ready models.

Geometries in CFD simulations need to be watertight and hence involve some level of cleanup. In CONVERGE, these watertight geometries need to be supplied as surface files. Historically, geometries could only be imported as triangulated surface files in stereolithography format (.stl) for pre-processing in CONVERGE Studio. This added an extra step of exporting the geometry in .stl from the CAD tool in which the geometry was created. Although a simple step, it affected engineers who were obtaining geometries from dedicated CAD teams at their companies. However, this changed over time, and users can now directly import most CAD files in their native formats, which are automatically triangulated upon loading. Once imported, these surface files need to be cleaned to meet the required quality standards. CONVERGE Studio is equipped with all the necessary tools and utilities to identify geometry defects, clean geometries, and, if required, modify them for simulations. Figure 2 highlights some of the geometry cleanup and diagnosis options in CONVERGE Studio.

Most of the surface manipulation tools are included with a CONVERGE license. For more complex and sophisticated surface preparation, CONVERGE Studio offers the Polygonica toolkit, available with an add-on license, which lets you perform more advanced operations on your geometries, such as coarsening over-refined surface triangulations, surface reconstruction, and automated surface healing. The toolkit also lets you specify a batch of surface repair operations with the Polygonica Job Launcher and apply them to an imported geometry.
In addition to the above-mentioned utilities, for all the engine specialists who choose to simulate a sector of their engine geometries, CONVERGE Studio offers a utility to quickly create a properly prepared sector geometry based on a piston profile and a few geometric inputs.
With the official release of CONVERGE 5, a new CAD Editor module was introduced in CONVERGE Studio. Unlike the module described above, where geometry preparation is performed on a triangulated geometry, the CAD Editor is based on the CATIA geometric modeling kernel, which uses the boundary representation (B-Rep) system. B-Rep is a more intuitive way of building and manipulating CAD geometries and is the technique most commonly used in CAD software. Unlike STL files, B-Rep geometries represent a 3D object by a collection of surfaces that distinguish the boundary between the interior and exterior of a solid. A big advantage of B-Rep is that it allows for non-manifold sheets (overlapping surfaces) bounded by edges, which is particularly helpful for geometries with several interfaces (like battery packs). Figure 3 shows the CAD Editor interface in CONVERGE Studio.

In the CAD Editor module you can manipulate CAD geometries without the need to triangulate the surface first. The tool allows you to generate custom surface meshes for certain parts of the geometry, which can be further validated using the diagnosis tool available within the module. Once the geometry is prepared, you can directly transfer the geometry from the CAD Editor module to the Case Setup module to set up the case for simulation.
With a user-friendly interface and comprehensive geometry manipulation tools, our team is making every effort to simplify and unify all of your CAD preparation and pre-processing operations under a single umbrella. As an essential part of the package, the capabilities in CONVERGE Studio continue to evolve! Do not hesitate to connect with us to learn more about CAD pre-processing in CONVERGE Studio or about the CONVERGE package as a whole.
Co-Author:
Allie Yuxin Lin
Marketing Writer II
Although fuel cells have recently gained prominence in today’s energy discourse, their conceptual origin dates back to the 19th century. In 1842, British scientist William Grove invented the first fuel cell, naming it a “gas battery.” For nearly a century, this curious invention would sit quietly in the scientific sidelines until the early 1930s, when English engineer Francis Bacon revisited Grove’s idea. Over the next two and a half decades, Bacon worked on an alkaline electrolyte fuel cell, which consumed pure oxygen and hydrogen. In 1959, his team revealed the “Bacon cell,” a six-kW prototype that was the first fuel cell powerful enough for practical use, setting a new benchmark for real-world energy applications and laying the foundation for modern fuel cell technology.
Fuel cells are electrochemical devices that convert the chemical energy of a fuel, such as hydrogen, and an oxidant, such as oxygen, directly into electrical energy. They are similar to batteries in that they produce electricity, but unlike batteries, they don't need to be recharged as long as a fuel source is provided. As such, fuel cells offer a clean energy alternative when used with renewable fuels, producing electricity whose main byproducts are water and heat.
However, these devices are notoriously difficult to model because they involve a complex interplay of physical, chemical, and electrical processes that occur simultaneously across multiple spatial and temporal scales. To function, fuel cells require electrochemical reactions, which are sensitive to variables like humidity, temperature, and pressure. Accurately capturing these reactions requires detailed modeling of mass transport, charge transfer, and heat management. Further, fuel cells often contain porous media, such as gas diffusion and catalyst layers, where multi-phase flow occurs.
With its autonomous meshing capabilities, CONVERGE can effectively capture the complexity of modern fuel cell geometries. Conjugate heat transfer (CHT) modeling in CONVERGE can be used to calculate the heat transfer throughout the fuel cell stack to locate regions of low or high temperature. Additionally, CONVERGE’s multi-phase modeling can simulate the flow of liquids and gases in the reactant supply channels and gas diffusion layers, which are represented as porous media. This can help fuel cell manufacturers predict local water content and simulate liquid water transport, which are important for evaluating the performance of the fuel cell. The fully coupled solution of electrochemistry, multi-phase fluid dynamics, and heat transfer in CONVERGE allows engineers to study the activation and mass transport losses in fuel cells, which can degrade cell performance.
At Convergent Science, we’re committed to pushing the boundaries of what our code can do, tackling new challenges and refining our tools with each new release. Our latest features overcome the challenges of fuel cell modeling, making our simulations sharper, faster, and more powerful than yesterday. Let’s dive into two case studies that showcase CONVERGE’s cutting-edge new features and how they’re driving innovation in fuel cell modeling.
Fuel cell performance can be heavily influenced by flow field design (i.e., the pattern of channels that direct gases across the cell’s surface). Different designs will affect how well reactants are distributed, how water and heat are managed, and ultimately, how efficiently the cell operates. Parallel flow fields use straight, side-by-side channels that offer low resistance and are easy to manufacture, but they can lead to uneven gas distribution and water buildup. Radial flow fields spread reactants from a central inlet, promoting uniform coverage. These designs are typically used in compact or round fuel cell geometries. One of the most popular and effective fuel cell designs is the serpentine flow field. In these fuel cells, the flow field for the gas channels is designed in a serpentine pattern, which ensures uniform gas distribution, enhances water management, and provides better heat transfer. These cells are especially useful in industries like automotive, aerospace, and portable energy, where reliable performance and compact design are critical. However, simulations of such devices are difficult due to the non-linear conjugate heat transfer, moving fluid flow, electric potential equations, and complex electrochemistry.
In this steady-state simulation, we used CONVERGE to simulate a serpentine fuel cell with hydrogen fuel to study the transport of reactants at different voltages. The geometry of a 50 cm2 cell with a five-path serpentine bipolar plate was derived from an experimental study.1 Both the mass flow rate of H2 at the anode inlet and O2 at the cathode inlet fluctuated with the applied voltage.
CONVERGE’s fully autonomous meshing easily handled the complex geometry of this case, and fixed embedding was applied around the catalytic and membrane layers for additional mesh refinement. The total cell count was 2.5 million, and the simulation was run with 24 cores.
We used CONVERGE’s pseudo-transient steady solver, which reformulates the steady-state problem into an equivalent transient problem by adding an artificial time derivative to the governing equations. This allows the solution to evolve over “pseudo-time” until it reaches a steady state, which can be faster than a true transient or direct steady-state simulation.
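The idea behind pseudo-transient continuation can be sketched on a scalar steady problem. The toy example below (illustrative only, not CONVERGE's solver) marches du/dτ = −R(u) and grows the pseudo-time step as the residual drops, a strategy often called switched evolution relaxation:

```python
# Pseudo-transient continuation on a scalar steady problem R(u) = u^3 - 2 = 0:
# an artificial time derivative du/dtau = -R(u) is marched until the residual
# vanishes, with the pseudo-time step grown as the residual shrinks.
def residual(u):
    return u**3 - 2.0

u, dtau = 0.5, 0.05
r0 = abs(residual(u))
for _ in range(500):
    r = residual(u)
    if abs(r) < 1e-10:                   # converged to the steady state
        break
    u = u - dtau * r                     # explicit pseudo-time step
    # grow dtau inversely with the residual, capped for stability
    dtau = min(0.2, 0.05 * r0 / max(abs(residual(u)), 1e-30))

print(f"steady state u = {u:.6f} (exact 2**(1/3) = {2 ** (1.0 / 3.0):.6f})")
```

Small pseudo-time steps keep the early, strongly nonlinear phase stable, while the growing step accelerates convergence once the solution is near steady state.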
For this direct current (DC) application, we employed the 3D electric potential solver, which predicts the electric potential, current field distributions, and associated Joule and electrochemical heat generation. When this is activated, CONVERGE solves for an electric potential solution within solid streams and porous media volumes with nonzero electrical conductivity. In doing so, CONVERGE accounts for ohmic heat dissipation (i.e., Joule heating).
CONVERGE accurately predicted the response of the fuel cell to applied voltages and reproduced three different polarization curves (activation polarization, ohmic polarization, and concentration polarization). These curves represent different types of voltage losses that can impact fuel cell performance.

Proton exchange membrane (PEM) fuel cells, which are also known as polymer exchange membrane fuel cells, work by splitting hydrogen into protons and electrons, which generates an electric current. PEM cell performance depends on tightly balanced electrochemical and transport processes, making these devices sensitive to variables such as temperature, pressure, porous media, species’ concentrations, and charge transfer coefficients.
Understanding what effect these operating conditions have on cell performance is key to improving fuel cell stability and efficiency. We carried out a sensitivity study on a simplified PEM fuel cell model to identify the most critical parameters and explore mitigation strategies.

CONVERGE assumes laminar flow and captures multi-phase flow with the evaporation and condensation models. Conjugate heat transfer modeling is applied on the cell membrane to capture conduction and convection.
At the cathode level, we applied the lumped electrochemistry model, which is currently implemented in CONVERGE as a user-defined function (UDF). The name “lumped” comes from the fact that the electrical resistance used to compute the current density is obtained as a “lumped” sum of the electrical resistances of all PEM layers (i.e., the membrane, the anode and cathode catalyst layers, the gas diffusion layers, the micro-porous layers, and the bipolar plates). This simplified 0D approach allows us to solve a simple algebraic nonlinear equation for current density at each computational cell instead of a full 3D differential equation. After replacing the single differential equation with a series of smaller nonlinear algebraic equations, we can solve them iteratively. In this way, our simulation still reaps the benefits of a full 3D model but only incurs the cost of solving algebraic nonlinear equations, resulting in faster turnaround for fuel cell simulations.
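To illustrate the kind of per-cell algebraic equation involved, here is a hypothetical lumped balance of a Nernst-like open-circuit potential, a Tafel activation loss, and a lumped ohmic resistance, solved by bisection for the current density. The equation form and every constant below are illustrative assumptions, not CONVERGE's actual UDF:

```python
import math

# Hypothetical per-cell lumped balance (illustrative constants):
#   V_cell = E - (R T / (alpha F)) ln(i / i0) - i * R_lumped
# Solved for the current density i at a given cell voltage by bisection.
R, T, F = 8.314, 353.0, 96485.0
E, alpha, i0, R_lumped = 1.1, 0.5, 10.0, 1e-4   # V, -, A/m^2, ohm m^2

def current_density(V_cell):
    b = R * T / (alpha * F)                      # Tafel slope [V]
    lo, hi = 1e-8, 1e6                           # bracketing interval [A/m^2]
    for _ in range(200):
        i = math.sqrt(lo * hi)                   # bisection in log space
        f = E - b * math.log(i / i0) - i * R_lumped - V_cell
        if f > 0:                                # residual decreases with i
            lo = i
        else:
            hi = i
    return math.sqrt(lo * hi)

i = current_density(0.7)
print(f"current density at 0.7 V: {i:.1f} A/m^2")
```

Lowering the cell voltage raises the current density, which is exactly the behavior traced out by a polarization curve.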
The Nernst equation calculates the cell potential of an electrochemical cell and shows how changes in reactant and product concentrations alter the cell’s voltage. According to this equation, increasing pressure would increase the cell potential. In our model, we increased the pressure by 0.5 bar on both the anode and cathode sides of the PEM fuel cell, which immediately increased the power output from the device. By increasing the pressure of the system, we increased the availability of reactant species, which offsets the limited current density and results in higher voltage output.
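The pressure effect can be checked directly from the Nernst equation for the hydrogen-oxygen cell. The sketch below assumes liquid product water and an illustrative reversible potential at roughly 80 °C:

```python
import math

# Nernst equation for a hydrogen-oxygen cell (liquid product water):
#   E = E0 + (R T / (2 F)) * ln(p_H2 * sqrt(p_O2))
# Pressures in bar; E0 and T are illustrative values near 80 C.
R, F = 8.314, 96485.0

def nernst(p_h2, p_o2, T=353.0, E0=1.18):
    return E0 + R * T / (2.0 * F) * math.log(p_h2 * math.sqrt(p_o2))

E_base = nernst(1.0, 1.0)
E_boost = nernst(1.5, 1.5)               # +0.5 bar on both anode and cathode
print(f"E at 1.0 bar: {E_base:.4f} V, at 1.5 bar: {E_boost:.4f} V")
```

The gain per cell is only on the order of 10 mV, but it compounds across a stack and also reflects the improved reactant availability described above.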
The main chemical reaction in a PEM fuel cell occurs at the membrane, when hydrogen reacts with oxygen to generate an electric current and water. However, in practice, industrial fuel cells typically supply air to the cathode, which only contains about 17-20% oxygen. As a result, oxygen depletion at the reaction site can lead to activation and concentration overpotentials. In our model, we used CONVERGE to generate the polarization curves of the fuel cell under three O2 concentrations: air-like (17.5% O2), O2-intermediate (37.5% O2), and O2-rich (70% O2). We found that as the oxygen concentration increased, so did the power output.

Fuel cell technology, which has certainly come a long way since its inception in the mid-19th century, represents the promise of efficient, sustainable energy. However, realizing that promise on a global scale requires overcoming engineering challenges in design, optimization, and operation. CONVERGE’s suite of state-of-the-art computational tools provides engineers and researchers the ability to simulate complex fuel cell processes with accuracy and efficiency. Thank you to William Grove and Francis Bacon for pioneering this revolutionary technology and setting the foundation for progress. Now, our tools can help shape the next chapter of fuel cell development, contributing to a greener future.
[1] Iranzo, Alfredo, et al. “Numerical Model for the Performance Prediction of a PEM Fuel Cell. Model Results and Experimental Validation.” International Journal of Hydrogen Energy, 35(20), 2010, 11533–11550. https://doi.org/10.1016/j.ijhydene.2010.04.129
The investigation of near-critical state fluid jets is an important problem for various engineering applications such as propulsion and thermal systems. In these contexts, ejectors are used to convert flow work into kinetic energy and, ultimately, into a pressure lift in various systems, including gas turbines, liquid propulsion systems, and refrigeration systems. The ejector operating principle relies on a high-speed jet in single- or multi-phase conditions. The efficiency of ejector devices depends on the physics of the jet, especially under multi-phase operations.
Ejector components are used in various engineering applications as expansion recovery devices. Specifically, ejectors are flow devices that convert kinetic energy into pressure recovery. Different types of ejectors exist depending on the application. For example, ejectors used for gas turbine cooling expand high-pressure gas while entraining a gas-phase stream, which increases the volumetric efficiency of the combustor; ejectors used in refrigeration systems expand liquid-phase fluid while entraining a gas-phase stream, producing a two-phase flow and decreasing the compressor work input. Figure 1 presents a schematic diagram of a multi-phase ejector. The primary inlet of an ejector, often known as the “motive” inlet, carries liquid at high pressure, while the suction inlet carries the vapor phase of the same or a different fluid. High-pressure liquid flowing out of the motive throat gains kinetic energy and induces a negative pressure gradient in the suction throat, which draws vapor through the suction inlet. These distinct vapor and liquid streams mix in the downstream mixing zone and then decelerate in the diffuser zone, recovering pressure.
Ejectors are not only used in refrigeration systems, but they are also widely applied in oil and natural gas systems for waste gas recovery processes and in gas turbines to enhance cooling performance by improving compressor entrainment efficiency.
Improving the design and operational performance of ejectors in a refrigeration system is linked to reduction of entropy generation. Entropy production restricts the coefficient of performance (COP) of the system from further improvement. Local exergy analysis of an ejector operating with carbon dioxide (CO2) as the fluid in a two-phase regime shows that entropy generation in the mixing zone is 2.92 times higher than in the diffuser zone. High entropy generation in the mixing zone is linked to a turbulence evolution mechanism at the shear layer of the jet, where entropy generation is related to turbulence length scales. The operation of ejectors relies on the physics and control strategy of the shear layer (for single phase) and the liquid-gas interface (for multi-phase), putting restrictions on improving the COP of the system. Therefore, understanding the evolution of shear layer turbulence and the mechanism of liquid-gas interface instabilities on the jet inside ejectors could provide new insights to decrease entropy generation and maximize the COP of the system.

This current research, in collaboration with a technical team from Bechtel, aims to understand the various stages inside the ejector to identify pathways to improve the ejector efficiency. The ejector of interest is a liquid-vapor variable-geometry CO2 ejector, as shown in Figure 1. The flow inside the ejector comprises a subcooled jet, which is the primary energy input to the ejector; a gaseous suction flow, which increases the cooling capacity of the ejector cycle through work recovery; a mixing zone, where entrainment of the suction flow into the motive flow occurs; and a diffuser, which increases pressure and reduces the work required by a compressor. To conduct the analyses, a high-fidelity computational fluid dynamics (CFD) model is needed to resolve the boundary layers and interphase phenomena.
The computational domain of the ejector, shown in Figure 2 [1-4], is modeled in cylindrical coordinates with axial (x), radial (r), and azimuthal (θ) directions. The domain includes the motive inlet (x/d = −13), suction inlet (−11 ≤ x/d ≤ −8), diffuser outlet (x/d = 23), and adiabatic no-slip walls. Boundary conditions are assigned based on experimental data. At the motive inlet, pressure, temperature, and CO2 mass fraction are prescribed, and the inlet gap is tuned to match the measured mass flow rate. The suction inlet is defined by its mass flow rate, temperature, and CO2 composition. The outlet pressure is fixed, with no backflow allowed, and all the walls are treated as adiabatic with a no-slip boundary condition.

CONVERGE CFD software provides a robust platform for simulating complex, unsteady, multi-phase flows with minimal manual meshing. In this study, CONVERGE is used to solve the three-dimensional compressible Navier-Stokes equations coupled with phase transport and large eddy simulations (LES) for CO2 ejector flows. Thermophysical and transport properties are sourced directly from the NIST database, enabling accurate modeling of real-fluid behavior across a wide range of thermodynamic states.
A key advantage of CONVERGE is its automatic cut-cell meshing, which accurately resolves complex geometries without requiring a user-generated mesh. This feature also enables box filtering for LES, ensuring that a large portion (≥80%) of turbulent kinetic energy is resolved. Furthermore, CONVERGE provides full control over subgrid-scale (SGS) models and constants, offering users flexibility comparable to that of in-house CFD codes.
Advanced grid control features include region-based embedding and Adaptive Mesh Refinement (AMR). Embedding refines the mesh locally (down to 0.125 mm), while AMR dynamically adapts grid resolution during the simulation (as fine as 0.0156 mm) based on gradients in velocity, temperature, and phase fraction. This results in a highly detailed, physics-driven mesh (60 million cells) that adapts to flow evolution without remeshing.
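As a rough sketch of how such a gradient-based refinement criterion works, the snippet below flags cells whose inter-cell jump in velocity, temperature, or phase fraction exceeds a threshold. The fields and threshold values are illustrative assumptions, not CONVERGE's actual AMR implementation:

```python
import numpy as np

def flag_cells_for_refinement(velocity, temperature, alpha,
                              du_max=50.0, dT_max=25.0, dalpha_max=0.05):
    """Flag cells for refinement where the local variation in velocity,
    temperature, or phase fraction exceeds a user threshold.
    Each argument is a 1D array of cell-centered values."""
    def exceeds(field, limit):
        jump = np.abs(np.diff(field))   # jump between neighboring cells
        flags = np.zeros(field.size, dtype=bool)
        flags[:-1] |= jump > limit      # flag both cells that share
        flags[1:] |= jump > limit       # the steep gradient
        return flags

    return (exceeds(velocity, du_max)
            | exceeds(temperature, dT_max)
            | exceeds(alpha, dalpha_max))

# Example: a sharp liquid-gas interface in the phase fraction field
# triggers refinement around the interface cells only.
alpha = np.array([1.0, 1.0, 0.9, 0.2, 0.0, 0.0])
u = np.zeros(6)
T = np.full(6, 300.0)
print(flag_cells_for_refinement(u, T, alpha))
```

The same idea extends to 3D by checking gradients along each direction and refining a cell whenever any field trips its threshold.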


Although CONVERGE does not include built-in verification tools, the simulation results have been rigorously validated following the ASME V&V 20 standard. Grid convergence studies reveal negligible numerical uncertainty (≤0.13%), and a comparison with experimental data confirms the model’s predictive capability, with model error bounds of 2.5% ±2.66% for mass flow rate and 0.72% ±1.09% for suction pressure.
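A grid convergence study of this kind typically follows the Richardson-extrapolation procedure described in ASME V&V 20; a minimal sketch with made-up solution values (not the study's data) looks like:

```python
import math

def grid_convergence(f_fine, f_med, f_coarse, r=2.0, Fs=1.25):
    """Estimate the observed order of accuracy p and the fine-grid
    Grid Convergence Index (GCI) from three solutions on systematically
    refined grids with constant refinement ratio r. Fs is the standard
    safety factor for a three-grid study."""
    p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
    rel_err = abs((f_med - f_fine) / f_fine)
    gci = Fs * rel_err / (r**p - 1.0)   # relative numerical uncertainty
    return p, gci

# Illustrative values only: e.g. a normalized mass flow rate on
# coarse/medium/fine grids.
p, gci = grid_convergence(1.000, 1.004, 1.020)
print(f"observed order p = {p:.2f}, GCI = {100 * gci:.3f}%")
```

A GCI of a fraction of a percent, as reported above for this study, indicates the numerical uncertainty is small compared with the model error.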
Lastly, spectral analysis of turbulent kinetic energy shows a clear inertial subrange with κ^(−5/3) scaling, confirming that the LES approach and discretization schemes successfully capture the dominant energy transfer mechanisms. Overall, CONVERGE enables high-fidelity simulations of multi-phase, turbulent flows with exceptional automation and accuracy.
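The κ^(−5/3) check amounts to fitting the log-log slope of the energy spectrum over the inertial subrange. The toy sketch below uses a synthetic power-law spectrum and an assumed wavenumber range; in practice E(κ) would come from the FFT of the resolved LES velocity field:

```python
import numpy as np

# Synthetic energy spectrum with an inertial subrange: E(k) ~ k^(-5/3).
k = np.arange(1, 513, dtype=float)
E = 0.5 * k**(-5.0 / 3.0)

# Fit the spectral slope over an assumed inertial subrange in log-log space.
inertial = (k >= 8) & (k <= 128)
slope, _ = np.polyfit(np.log(k[inertial]), np.log(E[inertial]), 1)
print(f"fitted spectral slope: {slope:.3f}")  # close to -5/3
```

A fitted slope near −5/3 over a decade or more of wavenumbers is the usual evidence that the resolved scales reproduce Kolmogorov's inertial-range energy cascade.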


The behavior of the motive jet in an ejector is governed by the turbulent structures that develop along the liquid-gas interface, directly influencing flow entrainment. Four distinct regimes are identified based on dominant physical mechanisms:

These regimes coexist and interact within the ejector. The instantaneous jet morphology, shown in Figure 7, is visualized using the spatial distribution of the density ratio ρCO2(g)/ρCO2(l). Grayscale shading ranges from dark (low gas-liquid density ratio) to white (high ratio), indicating interface transitions. The evolution of turbulent coherent structures at the interface is crucial for entrainment performance (ṁs/ṁm). The motive jet, an annular co-axial flow, is wall-bounded and subject to a streamwise adverse pressure gradient. Vorticity dynamics drive interface deformation:
In regime R2, Kelvin–Helmholtz instability (KHI) leads to the formation of ring vortices, supported by ωθ from regime R1. Azimuthal instabilities induce periodic bulges in these rings, forming counter-rotating vortex pairs around the jet shear layer. These ωx structures continue to stretch and intensify due to angular momentum conservation, leading to thinner, more energetic vortex formations (Figure 8).1
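The mechanisms above (interfacial roll-up, azimuthal instability, and vortex stretching) can be read off the standard variable-density vorticity transport equation; this is the textbook form, not an equation quoted from the study:

$$
\frac{D\boldsymbol{\omega}}{Dt}
= \underbrace{(\boldsymbol{\omega}\cdot\nabla)\mathbf{u}}_{\text{stretching/tilting}}
\;-\; \underbrace{\boldsymbol{\omega}\,(\nabla\cdot\mathbf{u})}_{\text{dilatation}}
\;+\; \underbrace{\frac{\nabla\rho \times \nabla p}{\rho^{2}}}_{\text{baroclinic torque}}
\;+\; \nu\,\nabla^{2}\boldsymbol{\omega}
$$

The stretching/tilting term is what intensifies the streamwise ωx structures via angular momentum conservation, while the baroclinic term generates vorticity at the liquid-gas interface wherever the density and pressure gradients are misaligned.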
A detailed understanding of jet morphology within the mixing zone of an ejector is essential for the development of next-generation ejector designs. From a thermodynamic perspective, this region is the primary source of exergy destruction, and its optimization presents an opportunity for significant performance improvements. In this study, the jet morphology has been categorized into distinct flow regimes based on the dominant underlying physics. This regime-based classification lays the groundwork for the development of low-order models that capture only the most relevant physical phenomena, thereby enabling faster and more efficient computational strategies.

Ultimately, this physics-informed modeling approach is expected to accelerate the shape optimization of ejectors across a wide range of applications. The use of CONVERGE CFD software has significantly streamlined this process through its advanced features, particularly automatic meshing and granular numerical control, which align with the company’s guiding principle: “Never Make a Mesh Again.”
For an in-depth discussion of the methodologies and findings, please refer to the following articles:
This research has been funded by Bechtel National, Inc., USA. We would like to acknowledge the continued support from Mr. David Ladd, Dr. Leonard J. Peltier, and Prof. Ivan C. Christov. The authors would also like to thank Convergent Science Inc. for providing an academic license and technical support through their CONVERGE Academic Program.
Bhaduri, S., Peltier, L.J., Ladd, D., Groll, E.A., and Ziviani, D., “Regimes of a Decelerating Wall-Bounded Multiphase Jet Inside Ejectors,” Physics of Fluids, 37, 2025. DOI: 10.1063/5.0278015
Co-Author:
Allie Yuxin Lin
Marketing Writer II
A few years ago, I lived in a small suburban neighborhood in Portland, Oregon. More than once, as I was driving at a leisurely pace of 30 mph down a local road, someone would whiz by me at an outrageously high speed. While they probably weren’t going at 100 mph (as I would passionately claim to my passenger), it certainly felt like it.
Today, I work at a company that deals with modeling combustion, and that experience is how I taught myself the concept of the deflagration to detonation transition (DDT). If, in some dystopian universe, my reality and the speedster’s reality were merged into one, that new car would be cruising steadily at 30 mph and then suddenly accelerating to 100 mph in under a second, theoretically experiencing DDT.
DDT is defined as the process where a slow-moving flame (i.e., my car) rapidly accelerates to a supersonic detonation wave (i.e., the speedster’s car). The microseconds leading up to DDT are known as flame acceleration (FA), and these phenomena are typically studied together. Conventionally, FA and DDT are studied in large-scale settings such as supernova explosions, large shock tubes, or coal mine passages. However, emissions regulations and the rising demand for more compact energy systems have also motivated their study in much smaller settings such as microchannels. These devices offer enhanced heat and mass transfer with lower manufacturing costs and are used in a variety of applications, including electronics cooling, biological systems, and HVAC devices. However, combustible fuel mixtures are more prone to detonating when passing through the highly confined passageways of microchannels, which are similar in size to the diameter of a single strand of hair. Studying FA and the ensuing DDT in microchannels can increase our understanding of the conditions that trigger detonation and enable better control and mitigation strategies in high-pressure systems.
Much of the existing literature on explosion safety has centered on investigating the effect of thermal wall boundary conditions, which play a significant role in flame propagation by affecting heat loss, flame stability, and ignition behavior. Another factor that can influence flame propagation and detonation is heterogeneous chemistry, in particular, surface reactions at catalytic walls. In micro-reactors, reactive catalytic wall coatings can alter and induce chemical exchange at the wall, affecting the FA and DDT process. Catalytic walls provide a surface on which fuel/air mixtures can react; this heterogeneous combustion takes place on the catalyst surface, rather than in the gas phase. The bulk of the catalytic combustion literature has focused on catalytic combustion over noble metals such as platinum or rhodium. By contrast, transition metals like nickel have only been studied for chemical reforming, a process that alters the molecular structure of hydrocarbons to produce other chemicals. In this study, Suryanarayan (Surya) Ramachandran, a Ph.D. candidate at the University of Minnesota Twin Cities, teamed up with Professor Suo Yang and research engineers at ExxonMobil Technology. They examined hydrogen ignition and flame propagation in a microchannel with catalytic nickel walls, where the highly confined environment of the microchannel prompted additional concerns of FA and DDT.1 I’ll hand it over to Surya to tell us about his research!
Co-Author:
Suryanarayan Ramachandran
Ph.D. Candidate,
University of Minnesota
In an ideal hydrogen combustion system, the fuel/oxidizer mixture would consist of hydrogen, with oxygen and nitrogen coming from the air. In industrial settings, some combustion products, such as water, may make their way back into the fuel/oxidizer mix. As a result, the mixture becomes highly vitiated with H2O. To mimic a realistic combustion scenario, we simulated combustion with a mixture of hydrogen, oxygen, nitrogen, and water. This mixture, which we named Case C1, showed no detonation.
This didn’t really answer any of our questions, since our research group set out to understand DDT. The C1 case didn’t show any detonation, so we wanted to figure out why it didn’t explode and if there would be another mixture that would actually show some kind of detonation. So, I thought, why not remove the water? The water isn’t really contributing to combustion or heat release; rather, it’s acting as a diluent. Plus, it has a high specific heat capacity, which means it pretty much acts like an energy sink by sucking away the heat release and reducing the overall flame temperature. By removing the water, we were left with a mixture of pure hydrogen and dry air, which we called C1d. C1d has nitrogen acting as the diluent in the mixture, but no vitiation (i.e., no water vapor). To evaluate other interactions and gather some comparison data, we also tested a H2/O2 mixture; this final variation was called C1p.
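Water's heat-sink role can be illustrated with a quick mixture heat-capacity estimate. The cp values below are approximate ideal-gas values near 1000 K and the compositions are illustrative (a roughly stoichiometric H2/air mixture, with and without 20% H2O vitiation), not the study's exact C1/C1d mixtures:

```python
# Approximate molar heat capacities near 1000 K (J/mol/K).
cp = {"H2": 30.2, "O2": 34.9, "N2": 32.7, "H2O": 41.3}

def mix_cp(x):
    """Mole-fraction-weighted mean molar heat capacity of a mixture."""
    return sum(x[s] * cp[s] for s in x)

# Roughly stoichiometric H2/air: H2 + 0.5 (O2 + 3.76 N2).
dry = {"H2": 0.296, "O2": 0.148, "N2": 0.556}
# Same mixture vitiated with 20% H2O by mole.
wet = {s: 0.8 * f for s, f in dry.items()} | {"H2O": 0.20}

print(f"dry mixture cp: {mix_cp(dry):.1f} J/mol/K")
print(f"vitiated cp:    {mix_cp(wet):.1f} J/mol/K")
```

The vitiated mixture has a noticeably higher mean heat capacity, so the same heat release produces a lower temperature rise, which is exactly the energy-sink effect described above.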
Since we wanted to study the influence of both gas-phase (homogeneous) and surface (heterogeneous) chemistry on the FA & DDT process, we decided to use CONVERGE for the CFD part of this study. The kind of detonation problems that we are studying require highly resolved meshes and Adaptive Mesh Refinement (AMR) to capture the flame front. In that sense, CONVERGE was the ideal choice, since it has the high-quality meshing capabilities we needed, as well as the option to include coupled homogeneous and heterogeneous surface chemistry.
To begin, we used CONVERGE to solve the governing multi-component reacting Navier-Stokes equations, accomplished through a collocated finite volume method (FVM), which conserves mass, momentum, total energy, and the species’ mass-fractions on a discretized mesh consisting of many cells. The velocities at the cell faces were obtained using a blended central and upwind scheme (i.e., the flux-blending scheme), where cell-face velocities represent weighted sums of upwinded (i.e., first-order accurate) and cell-averaged (i.e., second-order accurate) velocities. The Pressure Implicit with Splitting of Operators (PISO) scheme was employed to capture pressure-velocity coupling, while the Rhie-Chow interpolation scheme was used to avoid potential “checkerboarding” issues with the collocated grid.2 CONVERGE’s biconjugate gradient stabilized (BiCGSTAB) linear solver was used for the pressure Poisson equation, a reformulation of the Navier-Stokes equations that allowed us to directly calculate pressure by decoupling pressure from the velocity field. Additionally, we used the SAGE detailed chemical kinetics solver to solve the gas-phase and surface combustion reactions. SAGE solved the surface coverages and gas-phase mass fractions, enabling coupled gas-phase/surface reactions at the wall.
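The flux-blending idea, weighting a first-order upwind face value against a second-order central average, can be sketched in one dimension as follows. The blending factor here is a free parameter for illustration, not the value used in the study:

```python
import numpy as np

def face_velocities(u, beta=0.5):
    """Blended face values for 1D cell-centered velocities, assuming
    flow in the +x direction (the upwind donor is the left cell).
    beta = 0 gives pure first-order upwind; beta = 1 pure central."""
    upwind = u[:-1]                    # first-order: take the donor cell
    central = 0.5 * (u[:-1] + u[1:])   # second-order: cell average
    return beta * central + (1.0 - beta) * upwind

u = np.array([1.0, 2.0, 4.0])
print(face_velocities(u, beta=0.5))    # each face blends donor and average
```

The upwind contribution adds numerical dissipation that stabilizes the scheme near shocks and steep fronts, while the central contribution preserves accuracy in smooth regions.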
CONVERGE’s AMR helped us refine the mesh in areas of greater computational complexity and coarsen it in others. For the purposes of this study, cells were refined according to the local cell temperature: to ensure finer meshes on the accelerating flame front, we employed AMR only when the cell temperature fell within the range of 800–1900 K. We chose not to use AMR in Case C1, due to the large flame thickness (δf = 700 μm). For Case C1d, we applied AMR on top of the base mesh resolution to ensure six cells spanned the smaller flame thickness (δf = 27 μm). The final mixture, Case C1p, had an even smaller flame thickness of δf = 20 μm, so we further refined the mesh to achieve 16 points across the flame thickness, ensuring adequate resolution of the flame structure.
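The grid-sizing arithmetic behind "N cells across the flame thickness" is straightforward; this sketch computes how many refinement halvings are needed from a base cell size. The 100 μm base grid is an assumed value for illustration, not the study's setup:

```python
import math

def embed_level_for_flame(delta_f, base_dx, cells_across):
    """Number of cell-size halvings (embedding/AMR levels) needed so
    that at least `cells_across` cells span a flame of thickness
    delta_f, starting from a base cell size base_dx. Returns the
    level and the resulting refined cell size."""
    target_dx = delta_f / cells_across
    level = max(0, math.ceil(math.log2(base_dx / target_dx)))
    return level, base_dx / 2**level

# e.g. resolve a 27 um flame (Case C1d) with 6 cells across it,
# starting from an assumed 100 um base grid.
level, dx = embed_level_for_flame(27e-6, 100e-6, 6)
print(f"{level} levels -> dx = {dx * 1e6:.3f} um")
```

Because each level halves the cell size (and multiplies the cell count in 3D by eight), resolving thinner flames quickly becomes expensive, which is why AMR confines the finest cells to the flame front.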
Next, we performed several validation studies for CONVERGE’s gas-phase and surface chemistry mechanisms to enhance confidence in our simulation results. For example, CONVERGE’s gas-phase SAGE detailed chemistry solver and its hydrodynamic coupling was compared with results from the PeleC solver, an open-source CFD code used for combustion applications. Validation results are shown in Figure 1.

CONVERGE’s surface chemistry module was validated against Chen et al.3, a well-cited paper that simulates a catalytic micro-tube with gas-phase and surface reactions for premixed H2/air mixtures. This publication described a simple catalytic combustion study focusing on flame stabilization, rather than FA/DDT. CONVERGE’s results matched well with those of the paper.1
In Case C1, the flame did not exhibit acceleration, nor did it become a detonating flame. Rather, it simply propagated with a constant flamespeed. However, compared to the traditionally observed parabolic-like flame front profile, the flame inverted whenever surface chemistry was active (i.e., when the chemical reactions at the surface were explicitly modeled and accounted for), as seen in Figure 2. This reflects the preferential propagation of the flame along the walls due to catalytic surface chemistry.

However, when surface chemistry was disabled, the flame returned to the traditional parabolic shape, as shown in Figure 3.

After finding a strong production of the intermediate radicals OH and O along the wall surface, we concluded that catalytic surface reactions promote preferential propagation of the flame via the production of reactive intermediates that directly promote gas-phase combustion. In other words, the flame propagates along the catalytic walls due to the surface reactions of the fuel/oxidizer mixture and the intermediate radicals. We also found that the temperature distributions for the C1 cases run with surface chemistry were higher than those for the cases run with gas-phase chemistry only, likely because the surface chemistry calculations account for the additional heat generated by surface reactions.
In all C1 cases, the flame did not exhibit acceleration. This is attributed to the presence of diluents and vitiation in the mixture, which lowers the flamespeed and inhibits FA/DDT.4 Therefore, the same simulation and analysis procedure was carried out for the C1d mixture. In this case, removing water from the mixture led to higher flamespeeds and FA, but not DDT. In contrast with the vitiated cases (C1), flame inversion occurred only when surface chemistry was enabled without gas-phase chemistry; in cases with gas-phase reactions, the flame became parabolic. The flame in all C1d cases accelerated to high speeds (i.e., around Mach 0.1). Unlike Case C1, there was no flame propagation along the wall, since the short residence time (i.e., the time available for surface chemistry to couple with gas-phase chemistry) reduced the effect of the catalytic walls. The C1d cases exhibited rapid FA but did not reach DDT. We believe this is due to the long DDT run-up distance (i.e., the distance required for the flame to undergo the DDT process). On the other hand, the C1p cases exhibited rapid DDT after forming a tulip-like flame front in the initial stages. Both flame branches propagated preferentially along the wall before eventually uniting, forming a detonation front, as shown in Figure 4.

Thanks, Surya! To recap, Surya and his team, along with researchers from ExxonMobil Technology, used CONVERGE to simulate the propagation and acceleration of H2/O2 and H2/air flames for three different fuel mixtures over catalytic nickel walls. Each mixture responded differently to the interplay between surface and gas-phase chemistry, resulting in varying outcomes in terms of FA and DDT. Read more about Surya’s research in his paper!
Overall, this study was the first in the field to consider coupled gas-phase and surface reactions in catalytic nickel microchannels for assessing DDT. These findings have the potential to drive more specific studies tailored to industrial scenarios to improve explosion safety.
[1] Ramachandran, S., et al. “Flame Acceleration and Deflagration to Detonation Transition in a Microchannel with Catalytic Nickel Walls.” Physics of Fluids, 36(11), 2024, 116143. https://doi.org/10.1063/5.0235540
[2] Zhang, S., Zhao, X., and Bayyuk, S., “Generalized Formulations for the Rhie–Chow Interpolation.” Journal of Computational Physics, 258, 2014, 880–914. https://doi.org/10.1016/j.jcp.2013.11.006
[3] Chen, G.-B., et al. “Effects of Catalytic Walls on Hydrogen/Air Combustion inside a Micro-Tube.” Applied Catalysis A: General, 332(1), 2007, 89–97. https://doi.org/10.1016/j.apcata.2007.08.011
[4] Ramachandran, S., Srinivasan, N., Wang, Z., Behkish, A., and Yang, S., “A Numerical Investigation of Deflagration Propagation and Transition to Detonation in a Microchannel With Detailed Chemistry: Effects of Thermal Boundary Conditions and Vitiation.” Physics of Fluids, 35(7), 2023. https://doi.org/10.1063/5.0155645
Author:
Elizabeth Favreau
Marketing Writing Team Lead
It’s hard to beat the thrill of a NASCAR race. The roaring of engines as cars careen around the track as mere blurs, the deafening cheers of the fans, the animated voices of the announcers booming over the din. The atmosphere is electric, and excitement is palpable in the air as cars flash across the finish line.
Guided by the deft hands of the drivers, the race cars are propelled by powerful engines to mindboggling speeds—exceeding 200 mph on some tracks. The engine is the heart of the car, and it can easily make or break a race. Even minor tweaks to the engine can provide the small boost of power needed to best the competition.
Figuring out what tweaks to make, however, is not always easy. Exploring many different designs can be expensive, not just in terms of money, but also time—and time is a highly valued commodity in the racing world. With dozens of races each season, and each one in need of a specialized engine, being able to efficiently assess different design options is key.

Roush Yates Engines designs, tests, and builds purpose-built race engines for the NASCAR Cup Series and the NASCAR Xfinity Series. Founded in 2004 and headquartered in North Carolina, Roush Yates is the exclusive engine builder to Ford Performance. With nearly 400 wins across the two NASCAR series, Roush Yates is regularly powering cars to victory and championships. So how do they do it? In addition to state-of-the-art test facilities and a team of brilliant engineers and technicians, incorporating advanced modeling software like CONVERGE into their design process is one of their key strategies for winning.
Designing racing engines is obviously a different beast than designing engines for everyday passenger vehicles. Each engine must be tailored to the specific tracks where it will be raced, with the goal of eking out every bit of performance possible. To achieve this, you need to consider a variety of factors, including the length of the track (typically ranging from 1/2 mile to over 2 miles), the vehicle traction available, differences in driver style, climate conditions, and even elevation.
“It’s very interesting to design for those types of different environments to make sure we’re doing the most we can to bring the best engine we can to each track,” says Jamie McNaughton, Technical Director at Roush Yates Engines.
Power isn’t the only necessity in a racing engine, either; the engines also need to be durable. While these engines won’t be racking up hundreds of thousands of miles, they need to be at peak performance while being driven under extreme conditions for up to three races and numerous practice sessions, which can add up to some 1,500 miles. All the power in the world won’t help you win if your engine breaks down mid-race!
So, you need performance, reliability, and durability. No pressure, right? Now add in the fact that you’re also working on a very short timeline. While the design cycle for a passenger vehicle engine might be on the order of three years, in the NASCAR world, you’re working with timelines as short as 8–12 months. And there’s a lot that needs to be packed into those months, from planning and analysis to testing and production—any tools that can help speed up your design process can be a major advantage.
So how does Roush Yates leverage CFD in their engine design process?
Per the rules of NASCAR racing, manufacturers are working with homologated parts, i.e., parts that have been officially approved by the organization. Manufacturers can tweak these parts, but they can’t go off and make something brand new. That means that Roush Yates’ engineers are working within well-defined boundaries to try to find minor modifications that result in small but meaningful gains in power and performance.
This is where CFD shines. “Finding the last 0.5% that we’re looking for requires comprehensive 3D modeling,” says Jamie.
Roush Yates uses CONVERGE to model a variety of powertrain components, including intake manifolds, cylinder head ports, exhaust systems, intake systems, and cooling systems. To improve the engine’s gas exchange process, they use CONVERGE to analyze intake manifold flow losses, tune the manifold, and model the exhaust systems. Furthermore, they conduct cooling system evaluations to ensure that the coolant flow rate and system pressure are correct for the engine specifications and the tracks being raced.
“We’ve found CONVERGE’s combustion modeling and meshing technique to be very advantageous for complex geometries and transient simulations,” says Jamie. “Our main goal at Roush Yates is to have the highest power, efficiency, and the most reliable engines in NASCAR. Working toward these goals, we have continuously improved in all these areas throughout the race season with the help of CONVERGE.”
CFD also helps Roush Yates accelerate their development efforts to meet the rapid design cycles required by the sport. The power of simulation lies in the ability to test many different design iterations before manufacturing any components. Compared to physical prototyping, CFD simulations are relatively fast and cheap, and virtually modifying the designs of the components can be done in a matter of clicks.
CONVERGE’s autonomous meshing makes it fast and simple to set up many different cases, because you don’t need to manually create any meshes. This allows you to analyze dozens or even hundreds of design options to determine which ones are the most promising. Only needing to build and test a much smaller number of components leads to a faster time to the track. Moreover, being able to explore so many designs allows you to find those small increases in performance that can end up providing a big advantage on the track.
“CONVERGE enables rapid setup of simulation models, and it has a fast learning curve—new analysts can be brought up to speed on CONVERGE in a matter of weeks,” says Jamie. “Additionally, the more recent versions of CONVERGE have runtimes that scale very well on CPUs. The values of speed and simplicity are some of the most essential capabilities for a CFD tool in the motorsport industry.”
For Roush Yates, their advanced design techniques clearly pay off. Boasting 12 NASCAR Cup Series championships, 17 NASCAR Xfinity Series championships, and hundreds of wins and poles across the two series, Roush Yates is at the top of the game in the motorsport industry. They employ more than 100 people in their engine shop, doing everything from design and simulation to building and testing, in order to compete on an international stage in upward of 70 events each year.
As Jamie says, “It’s the kind of situation where if you have a job you really love, it’s not so much work as having a great time, continuing to learn and build a great team to achieve our goals.”
No one can say how the next race will unfold, but one thing’s for sure—we’ll continue to cheer on our partners at Roush Yates and do our best to support them on their NASCAR journey.
Learn more about Roush Yates’ engine design process at our upcoming webinar, The Power of CONVERGE for Race Engine Development at Roush Yates Engines, presented by Jamie on September 10 at 10:30am CDT! Register here.
Author:
Allie Yuxin Lin
Marketing Writer
In 2017, Convergent Science expanded to Pune, India, welcoming Ashish Joshi as the founding leader of our new office. Back then, the office was a quiet hub of possibility with plenty of open desks, a one-person team, and the excitement of building something new. We wrote a blog post back then, documenting the early days and the potential the office held. Fast forward eight years, and the office has become a bustling environment, filled with new ideas, forward-thinking people, and dynamic energy. Convergent Science India LLP has grown not just in size but also in spirit, as we welcomed new colleagues, took on interesting projects, and worked to build a collaborative culture. But you know the ending, so let’s start at the beginning.

The initial purpose of the India office was to capture the internal combustion engine (ICE) CFD market in the Indian region. The India office was born in “Supreme HQ,” an office space with a maximum capacity of 12 employees. In August 2017, Ashish welcomed his first teammate, Kamlesh Patel.
“Being the first employee at Convergent Science India wasn’t just about joining early—it was about helping shape the foundation of something lasting,” says Kamlesh. “From navigating new challenges and giving training courses to growing alongside brilliant minds and forming lifelong friendships—this journey has been deeply personal and incredibly fulfilling. I’m proud of how far we’ve come, grateful for the people who made it possible, and excited for everything still to come.”
The two became the core of our Indian operations, with Kamlesh focusing primarily on ICE support. Soon after, Harshan Arumugam joined the team to explore how CONVERGE could break into new application areas beyond engines.
Like any start-up organization, the early days of Convergent Science India were riddled with challenges. Even small administrative tasks like opening company bank accounts or accounting for tax compliance were immense hurdles for the three-person team. Not to mention, CONVERGE awareness in India was minimal at the time, so the team had to educate the market while simultaneously training new engineers and building brand credibility.
As operations stabilized, the vision expanded. The team began exploring markets in neighboring Southeast Asia, carrying the CONVERGE message beyond India and across international waters. A major milestone was the successful organization of the first CONVERGE User Conference in the region in 2019. Strong support from the world headquarters played a crucial role in strengthening the company’s reputation and boosting the credibility of the India office. Through this newfound visibility, the India team was able to exert broader regional influence, quickly pulling in new CONVERGE customers.
About a year or so in, Ashish proposed a novel idea: to expand the India office to include functions outside of technical support. The leadership team in Madison approved, and soon, the team began looking to fill roles in Marketing, Documentation, Testing & Validation, Development, and more. By and by, the India office evolved from a small service branch to a full-fledged contributor to Convergent Science’s global operations.
It wasn’t long before our expansion efforts started to show. By 2025, Convergent Science India has become a diverse, cross-functional powerhouse, whose headcount of 36 employees is a powerful marker of how far we’ve come.

“Being not only the newest but also the youngest employee, I feel excited to be working here and learning about CFD. As my first full time position after college, I had no idea what to expect, but I felt both welcomed and challenged,” says Rohit Kamath, the office’s newest hire. “The environment is lively and everyone is so knowledgeable and approachable. Not to mention, the office events, like diya painting or pottery workshops, keep our day-to-day life fresh and exciting. I’m grateful for the sense of community I’ve found here. Whether it’s lunch breaks to the nearby coffee shop for the best 35 ₹ ($0.40) cup of coffee or working on new marketing applications, there’s always something new to learn and grow from.”
As new hires poured in, the original Supreme HQ office space started filling up, proof that our ambitions were quickly outgrowing our humble beginnings. As such, we moved to a larger space at IndiQube Unity Tower. The new office provided more room for our rapidly expanding team. After overseeing the move to the new office building and getting things established there, Ashish moved to the U.S. to pursue a different role within the company. He passed the managerial reins to Yajuvendra Shekhawat (Yaju), who is now the India office’s general manager.
Under Yaju’s guidance, the India team has made inroads into application areas beyond ICEs, although ICEs remain the largest component of our market in the Indian region. Turbomachinery is an emerging focus, and the office has also had success in the oil and gas industry. Simultaneously, the team has been actively encouraging existing clients to use CONVERGE for applications beyond ICEs, broadening our solver’s foothold in R&D environments. The office has also built strong relationships with leading academic institutions, particularly across the IIT system. Many of these prestigious institutions now use CONVERGE for a wide variety of research applications. On the industrial front, the office also works with most of the major automotive companies in India that are engaged in IC engine R&D.
With the current office at IndiQube Unity Tower now nearing capacity, Yaju and his team are actively exploring options for the next phase of expansion. For now, there’s still room to grow, and a lease renewal is under consideration for September 2025. As the team continues to expand in both size and scope, it’s clear that the India office will remain a vital part of Convergent Science’s global operations.
“In 2017, I joined Convergent Science India, fresh out of university with a passion for internal combustion engines,” says Kamlesh. “Ironically, my first interaction with Ashish was a polite rejection for an internship at CS India due to limited resources. Life clearly had other plans. From a two-person team to a thriving office of nearly 40, I’ve had the privilege of growing with CS India from day one. Early on, I was delivering training courses to customers and universities, a challenge that pushed me out of my comfort zone and helped me grow. What makes CS India truly special is the people. From cricket and coffee to road trips and other milestones, we’ve built friendships that go far beyond work. A huge thanks to Ashish for setting the tone and Yaju for continuing to lead with empathy, integrity, and vision. They, along with our global leadership, have given all of us the freedom to grow, explore, and evolve not just as engineers, but as people.”

Kamlesh’s words are indicative of the India office’s vibrant and inclusive culture. The team understands that productivity isn’t just about working hard behind a screen; it can also look like birthday celebrations in the office, outdoor cricket games, and company walks to the nearest convenience store for coffee or other delightful snacks. As we look ahead to the near future, we’re excited to keep expanding, evolving, and writing the next chapter of our office’s story together.
Graphcore has used a range of technologies from Mentor, a Siemens business, to successfully design and verify its latest M2000 platform based on the Graphcore Colossus™ GC200 Intelligence Processing Unit (IPU) processor.
Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD helps users create thermal models of electronics packages easily and quickly. Watch this short video to learn how.
With this release, Simcenter FLOEFD allows users to add a component into a direct current (DC) electro-thermal calculation by the given component’s electrical resistance. The corresponding Joule heat is calculated and applied to the body as a heat source. Watch this short video to learn how.
With this release, the software features a new battery model extraction capability that can be used to extract the Equivalent Circuit Model (ECM) input parameters from experimental data. This enables you to get the required input parameters faster and more easily. Watch this short video to learn how.
With this release, Simcenter FLOEFD allows users to create a compact Reduced Order Model (ROM) that solves at a faster rate while still maintaining a high level of accuracy. Watch this short video to learn how.
High semiconductor temperatures may lead to component degradation and ultimately failure. Proper semiconductor thermal management is key for design safety, reliability and mission critical applications.
A common question from Tecplot 360 users centers around the hardware they should buy to achieve the best performance. The answer is, invariably, “it depends.” That said, we’ll try to demystify how Tecplot 360 utilizes your hardware so you can make an informed decision in your hardware purchase.
Let’s have a look at each of the major hardware components on your machine and show some test results that illustrate the benefits of improved hardware.
Our test data is an OVERFLOW simulation of a wind turbine. The data consists of 5,863 zones totaling 263,075,016 elements, and the file size is 20.9 GB. For our test we:
The test was performed using 1, 2, 4, 8, 16, and 32 CPU-cores, with the data on a local HDD (spinning hard drive) and a local SSD (solid-state disk). Limiting the number of CPU cores was done using Tecplot 360’s --max-available-processors command line option.
Data was cleared from the disk cache between runs using RamMap.
Advice: Buy the fastest disk you can afford.
In order to generate any plot in Tecplot 360, you need to load data from a disk. Some plots require more data to be loaded off disk than others. Some file formats are also more efficient than others – particularly file formats that summarize the contents of the file in a single header portion at the top or bottom of the file – Tecplot’s SZPLT is a good example of a highly efficient file format.
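To see why a format that summarizes its contents in a header can be so efficient, here is a minimal sketch in Python. This is a toy format of our own invention (not SZPLT’s actual layout): a header records each zone’s byte offset and length, so a reader can seek directly to the one zone it needs instead of scanning the whole file.

```python
import struct
from io import BytesIO

# Toy header-indexed format (NOT SZPLT's real layout): a header listing
# each zone's byte offset and length, followed by the zone payloads.

def write_indexed(zones):
    header_size = 4 + 8 * len(zones)          # count + (offset, length) pairs
    body, index, offset = b"", [], header_size
    for payload in zones:
        index.append((offset, len(payload)))
        body += payload
        offset += len(payload)
    buf = struct.pack("<I", len(zones))
    for off, length in index:
        buf += struct.pack("<II", off, length)
    return buf + body

def read_zone(f, i):
    f.seek(0)
    (count,) = struct.unpack("<I", f.read(4))
    assert i < count, "zone index out of range"
    f.seek(4 + 8 * i)                         # jump to zone i's index entry
    off, length = struct.unpack("<II", f.read(8))
    f.seek(off)                               # one seek, one read: no full scan
    return f.read(length)

data = write_indexed([b"zone-A", b"zone-B-payload", b"zone-C"])
f = BytesIO(data)
print(read_zone(f, 1))   # only zone B's bytes are read from the body
```

The key point is that the reader never touches zones A or C; with thousands of zones on a slow disk, skipping that scan is where the time savings come from.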
We found that the SSD was 61% faster than the HDD when using all 32 CPU-cores for this post-processing task.
All this said, if your data are on a remote server (network drive, cloud storage, HPC, etc.), you’ll want to ensure you have a fast disk on the remote resource and a fast network connection.
With Tecplot 360 the SZPLT file format coupled with the SZL Server could help here. With FieldView you could run in client-server mode.

Advice: Buy the fastest CPU, with the most cores, that you can afford. But realize that performance is not always linear with the number of cores.
Most of Tecplot 360’s data compute algorithms are multi-threaded – meaning they’ll use all available CPU-cores during the computation. These include (but are not limited to): Calculation of new variables, slices, iso-surfaces, streamtraces, and interpolations. The performance of these algorithms improves linearly with the number of CPU-cores available.
You’ll also notice that the overall performance improvement is not linear with the number of CPU-cores. This is because loading data off disk becomes the dominant operation, so performance asymptotically approaches the disk read speed.
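This flattening is exactly what Amdahl’s law predicts when the disk read is treated as the serial fraction of the job. The sketch below uses an assumed 20% serial fraction purely for illustration; it is not a measured number from our benchmark.

```python
# Amdahl's law: if a fraction s of the job (here, disk reads) cannot be
# parallelized, total speedup on n cores is capped at 1/s no matter how
# many cores you add.

def speedup(n_cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# Assume (hypothetically) that 20% of the wall time is disk I/O:
for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:2d} cores -> {speedup(n, 0.20):4.2f}x")
# The speedup flattens toward the 1/0.20 = 5x ceiling long before
# 32 cores, which is why a faster disk (smaller s) raises the ceiling.
```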

You might notice that the HDD performance actually got worse beyond 8 CPU-cores. We believe this is because the HDD on this machine was just too slow to keep up with 16 and 32 concurrent threads requesting data.
It’s important to note that with data on the SSD the performance improved all the way to 32 CPU-cores. Further reinforcing the earlier advice – buy the fastest disk you can afford.
Advice: Buy as much RAM as you need, but no more.
You might be thinking: “Thanks for nothing – really, how much RAM do I need?”
Well, that’s something you’re going to have to figure out for yourself. The more data Tecplot 360 needs to load to create your plot, the more RAM you’re going to need. Computed iso-surfaces can also be a large consumer of RAM – such as the iso-surface computed in this test case.
If you have transient data, you may want enough RAM to post-process a couple time steps simultaneously – as Tecplot 360 may start loading a new timestep before unloading data from an earlier timestep.
The amount of RAM required is going to be different depending on your file format, cell types, and the post-processing activities you’re doing. For example:
When testing the amount of RAM used by Tecplot 360, make sure to set the Load On Demand strategy to Minimize Memory Use (available under Options>Performance).

This will give you an understanding of the minimum amount of RAM required to accomplish your task. When set to Auto Unload (the default), Tecplot 360 will maintain more data in RAM, which improves performance. The amount of data Tecplot 360 holds in RAM is dictated by the Memory threshold (%) field, seen in the image above. So you – the user – have control over how much RAM Tecplot 360 is allowed to consume.
Advice: Most modern graphics cards are adequate; even Intel integrated graphics provide reasonable performance. Just make sure you have up-to-date graphics drivers. If you have an Nvidia graphics card, favor the “Studio” drivers over the “Game Ready” drivers. The “Studio” drivers are typically more stable and offer better performance for the types of plots produced by Tecplot 360.
Many people ask specifically what type of graphics card they should purchase. This is, interestingly, the least important hardware component (at least for most of the plots our users make). Most of the post-processing pipeline is dominated by the disk and CPU, so the time spent rendering the scene is a small percentage of the total.
That said – there are some scenes that will stress your graphics card more than others. Examples are:
Note that Tecplot 360’s interactive graphics performance currently (2023) suffers on Apple Silicon (M1 & M2 chips). The Tecplot development team is actively investigating solutions.
As with most things in life, striking a balance is important. You can spend a huge amount of money on CPUs and RAM, but if you have a slow disk or slow network connection, you’re going to be limited in how fast your post-processor can load the data into memory.
So, evaluate your post-processing activities to try to understand which pieces of hardware may be your bottleneck.
For example, if you:
And again – make sure you have enough RAM for your workflow.
The post What Computer Hardware Should I Buy for Tecplot 360? appeared first on Tecplot Website.
Three years after our merger began, we can report that the combined FieldView and Tecplot team is stronger than ever. Customers continue to receive the highest quality support and new product releases, and we have built a solid foundation that will allow us to continue contributing to our customers’ successes long into the future.
This month we have taken another step by merging the FieldView website into www.tecplot.com. Our social media outreach will also be combined. Stay up to date with news and announcements by subscribing and following us on social media.

Members of Tecplot 360 & FieldView teams exhibit together at AIAA SciTech 2023. From left to right: Shane Wagner, Charles Schnake, Scott Imlay, Raja Olimuthu, Jared McGarry and Yves-Marie Lefebvre. Not shown are Scott Fowler and Brandon Markham.
It’s been a pleasure seeing two groups that were once competitors come together as a team, learn from each other and really enjoy working together.
– Yves-Marie Lefebvre, Tecplot CTO & FieldView Product Manager.
Our customers have seen some of the benefits of our merger in the form of streamlined services from the common Customer Portal, simplified licensing, and license renewals. Sharing expertise and assets across teams has already led to the faster implementation of modules such as licensing and CFD data loaders. By sharing our development resources, we’ve been able to invest more in new technology, which will soon translate to increased performance and new features for all products.
Many of the improvements are internal to our organization but will have lasting benefits for our customers. Using common development tools and infrastructure will enable us to be as efficient as possible to ensure we can put more of our energy into improving the products. And with the backing of the larger organization, we have a firm foundation to look long term at what our customers will need in years to come.
We want to thank our customers and partners for their support and continued investment as we endeavor to create better tools that empower engineers and scientists to discover, analyze and understand information in complex data, and effectively communicate their results.
The post FieldView joins Tecplot.com – Merger Update appeared first on Tecplot Website.
One of the most memorable parts of my finite-elements class in graduate school was a comparison of linear elements and higher-order elements for the structural analysis of a dam. As I remember, they were able to duplicate the results obtained with 34 linear elements by using a SINGLE high-order element. This made a big impression on me, but the skills I learned at that time remained largely unused until recently.
You see, my Ph.D. research and later work used finite-volume CFD codes to solve steady-state viscous flows. For steady flows, there didn’t seem to be much advantage to using higher than 2nd- or 3rd-order accuracy.
This has changed recently as the analysis of unsteady vortical flows has become more common. The use of higher-order (greater than second-order) computational fluid dynamics (CFD) methods is increasing. Popular government and academic CFD codes such as FUN3D, KESTREL, and SU2 have released, or are planning to release, versions that include higher-order methods. This is because higher-order accurate methods offer the potential for better accuracy and stability, especially for unsteady flows. This trend is likely to continue.
Commercial visual analysis codes are not yet providing full support for higher-order solutions. The CFD Vision 2030 study states:
“…higher-order methods will likely increase in utilization during this time frame, although currently the ability to visualize results from higher order simulations is highly inadequate. Thus, software and hardware methods to handle data input/output (I/O), memory, and storage for these simulations (including higher-order methods) on emerging HPC systems must improve. Likewise, effective CFD visualization software algorithms and innovative information presentation (e.g., virtual reality) are also lacking.”
The isosurface algorithm described in this paper is the first step toward improving higher-order element visualization in the commercial visualization code Tecplot 360.
Higher-order methods can be based on either finite-difference methods or finite-element methods. While some popular codes use higher-order finite-difference methods (OVERFLOW, for example), this paper will focus on higher-order finite-element techniques. Specifically, we will present a memory-efficient recursive subdivision algorithm for visualizing the isosurface of higher-order element solutions.
In previous papers we demonstrated this technique for quadratic tetrahedral, hexahedral, pyramid, and prism elements with Lagrangian polynomial basis functions. In the paper Optimized Implementation of Recursive Sub-Division Technique for Higher-Order Finite-Element Isosurface and Streamline Visualization, we discuss the integration of these techniques into the engine of the commercial visualization code Tecplot 360 and the speed optimizations involved. We also discuss the extension of the recursive subdivision algorithm to cubic tetrahedral and pyramid elements, and to quartic tetrahedral elements. Finally, we discuss its extension to the computation of streamlines.
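To give a flavor of the idea, here is a deliberately simplified 1D sketch of recursive subdivision (our own illustration, not the paper’s implementation, which operates on full 3D elements): an edge carrying a higher-order field is subdivided until linear interpolation is locally accurate, and the isovalue crossing is then located on the final sub-interval.

```python
# Simplified 1D sketch of recursive subdivision: given a field sampled
# by a callable f, subdivide [a, b] until linear interpolation of f is
# accurate at the midpoint, then locate the isovalue crossing by linear
# interpolation on that sub-interval.

def find_crossing(f, a, b, iso, tol=1e-6):
    fa, fb = f(a) - iso, f(b) - iso
    if fa * fb > 0:
        return None                           # no sign change on this edge
    mid = 0.5 * (a + b)
    linear_mid = 0.5 * (fa + fb)              # linear estimate at the midpoint
    if abs((f(mid) - iso) - linear_mid) < tol:
        return a + (b - a) * fa / (fa - fb)   # linear root on [a, b]
    # Field still curved here: recurse into whichever half has the crossing
    left = find_crossing(f, a, mid, iso, tol)
    return left if left is not None else find_crossing(f, mid, b, iso, tol)

quadratic = lambda x: x * x   # stand-in for a quadratic solution along an edge
x = find_crossing(quadratic, 0.0, 1.0, iso=0.25)
print(x)   # crossing of x^2 = 0.25 on [0, 1], i.e. x = 0.5
```

The appeal of the recursive form is that flat regions terminate after one linearity check, while subdivision effort concentrates only where the higher-order field actually curves.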
The post Faster Visualization of Higher-Order Finite-Element Data appeared first on Tecplot Website.
In this release, we are very excited to offer “Batch-Pack” licensing for the first time. A Batch-Pack license enables a single user access to multiple concurrent batch instances of our Python API (PyTecplot) while consuming only a single license seat. This option will reduce license contention and allow for faster turnaround times by running jobs in parallel across multiple nodes of an HPC. All at a substantially lower cost than buying additional license seats.

Data courtesy of ZJ Wang, University of Kansas, visualization by Tecplot.
The post Webinar: Tecplot 360 2022 R2 appeared first on Tecplot Website.
Batch-mode is a term nearly as old as computers themselves. Despite its age, however, it is representative of a concept that is as relevant today as it ever was, perhaps even more so: headless (scripted, programmatic, automated, etc.) execution of instructions. Lots of engineering is done interactively, of course, but oftentimes the task is a known quantity and there is a ton of efficiency to be gained by automating the computational elements. That efficiency is realized ten times over when batch-mode meets parallelization, and that’s why we thought it was high time we offered a batch-mode licensing model for Tecplot 360’s Python API, PyTecplot. We call them “batch-packs.”
Tecplot 360 batch-packs work by enabling users to run multiple concurrent instances of our Python API (PyTecplot) while consuming only a single license seat. It’s an optional upgrade that any customer can add to their license for a fee. The benefit? The fee for a batch-pack is substantially lower than buying an equivalent number of license seats – which makes it easier to justify outfitting your engineers with the software access they need to reach peak efficiency.
Here is a handy little diagram we drew to help explain it better:

Each network license allows ‘n’ seats. Traditionally, each instance of PyTecplot consumes 1 seat. Prior to the 2022 R2 release of Tecplot 360 EX, licenses only operated using the paradigm illustrated in the first two rows of the diagram above (that is, a user could check out up to ‘n’ seats, or ‘n’ users could check out a single seat). Now customers can elect to purchase batch-packs, which will enable each seat to provide a single user with access to ‘m’ instances of PyTecplot, as shown in the bottom row of the figure.
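The fan-out pattern a batch-pack enables looks roughly like the sketch below. To keep it self-contained, the worker is a placeholder standing in for a PyTecplot batch script (in practice each instance would be its own Python process running real PyTecplot calls), and the batch-pack size of 16 is a hypothetical ‘m’, not a real license tier.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder for a PyTecplot batch job (load a timestep, extract
# slices, export images). Real code would run in separate processes.
def process_timestep(step):
    return f"timestep {step}: slices + images written"

BATCH_PACK_SIZE = 16   # hypothetical 'm' concurrent instances per seat

# Fan 100 jobs out across at most BATCH_PACK_SIZE concurrent instances,
# all charged against a single license seat.
timesteps = range(100)
with ThreadPoolExecutor(max_workers=BATCH_PACK_SIZE) as pool:
    results = list(pool.map(process_timestep, timesteps))

print(results[0])
```

The point of the diagram above is simply that the `max_workers` ceiling moves from 1 (one seat, one instance) to ‘m’ without consuming additional seats.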
In addition to a cost reduction (vs. purchasing an equivalent number of network seats), batch-pack licensees will enjoy:
We’re excited to offer this new option and hope that our customers can make the most of it.
The post Introducing 360 “Batch-Packs” appeared first on Tecplot Website.
If you care about how you present your data and how people perceive your results, stop reading and watch this talk by Kristen Thyng on YouTube. Seriously, I’ll wait, I’ve got the time.
Which colormap you choose, and which data values are assigned to each color can be vitally important to how you (or your clients) interpret the data being presented. To illustrate the importance of this, consider the image below.

Figure 1. Visualization of the Southeast United States. [4]
Before I explain what a perceptually uniform colormap is, let’s start with everyone’s favorite: the rainbow colormap. We all love the rainbow colormap because it’s pretty and is recognizable. Everyone knows “ROY G BIV” so we think of this color progression as intuitive, but in reality (for scalar values) it’s anything but.
Consider the image below, which represents the “Estimated fraction of precipitation lost to evapotranspiration”. This image makes it appear that there’s a very distinct difference in the scalar value right down the center of the United States. Is there really a sudden change in the values right in the middle of the Great Plains? No – this is an artifact of the colormap, which is misleading you!

Figure 2. This plot illustrates how the rainbow colormap is misleading, giving the perception that there is a distinct difference in the middle of the US, when in fact the values are more continuous. [2]
So let’s dive a little deeper into the rainbow colormap and how it compares to perceptually uniform (or perceptually linear) colormaps.
Consider the six images below: what are we looking at? If you were to look only at the top three images, you might get the impression that the scalar value changes non-linearly, while this value (radius) is actually changing linearly. If presented with the rainbow colormap, you’d be forgiven if you didn’t guess that the object is a cone, colored by radius.

Figure 3. An example of how the rainbow colormap imparts information that does not actually exist in the data.
So why does the rainbow colormap mislead? It’s because the color values are not perceptually uniform. In this image you can see how the perceptual changes in the colormap vary from one end to the other. The gray scale and “cmocean – haline” colormaps shown here are perceptually uniform, while the rainbow colormap adds information that doesn’t actually exist.

Figure 4. Visualization of the perceptual changes of three colormaps. [5]
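You can get a rough feel for this non-uniformity with a few lines of Python. Here we approximate perceived lightness with Rec. 709 luma (a crude stand-in for a true perceptual measure such as CIELAB L*) along a simple HSV rainbow sweep versus a grayscale ramp:

```python
import colorsys

# Approximate perceived lightness with Rec. 709 luma (a rough stand-in
# for true perceptual lightness such as CIELAB L*).
def luma(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

n = 101
# Rainbow: sweep hue from blue (h = 0.7) down to red (h = 0.0).
rainbow = [luma(*colorsys.hsv_to_rgb(0.7 * (1 - i / (n - 1)), 1.0, 1.0))
           for i in range(n)]
# Grayscale: lightness tracks the data value exactly.
gray = [i / (n - 1) for i in range(n)]

def monotonic(seq):
    return all(b >= a for a, b in zip(seq, seq[1:]))

print("grayscale monotonic:", monotonic(gray))     # True
print("rainbow monotonic:  ", monotonic(rainbow))  # False
# The rainbow's lightness rises and falls several times across the
# sweep, so equal steps in the data do not look like equal steps in
# the plot -- the false "edges" in Figure 2 live in those reversals.
```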
Well, that depends. Tecplot 360 and FieldView are typically used to represent scalar data, so Sequential and Diverging colormaps will probably get used the most – but there are others we will discuss as well.
Sequential colormaps are ideal for scalar values in which there’s a continuous range of values. Think pressure, temperature, and velocity magnitude. Here we’re using the ‘cmocean – thermal’ colormap in Tecplot 360 to represent fluid temperature in a Barracuda Virtual Reactor simulation of a cyclone separator.

Diverging colormaps are a great option when you want to highlight a change in values. Think ratios, where the values span from -1 to 1, it can help to highlight the value at zero.

The diverging colormap is also useful for “delta plots” – In the plot below, the bottom frame is showing a delta between the current time step and the time average. Using a diverging colormap, it’s easy to identify where the delta changes from negative to positive.

If you have discrete data that represent things like material properties – say “rock, sand, water, oil” – these data can be represented using integer values and a qualitative colormap. This type of colormap will do a good job of supplying distinct colors for each value. An example of this, from a CONVERGE simulation, can be seen below. Instructions to create this plot can be found in our blog, Creating a Materials Legend in Tecplot 360.

Perhaps infrequently used, but still important to point out, is the “phase” colormap. This is particularly useful for values which are cyclic, such as a theta value used to represent wind direction in this FVCOM simulation result. If we were to use a simple sequential colormap (inset plot below), you would observe what appears to be a large gradient where the wind direction is 360° vs. 0°. Logically these are the same value, and using the “cmocean – phase” colormap allows you to communicate the continuous nature of the data.
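The same wrap-around issue bites when you compute with cyclic data, not just when you color it. A small sketch of the standard fix, the signed shortest angular difference, shows that 1° and 359° are really 2° apart, not 358°:

```python
# Shortest signed difference between two angles in degrees: maps the
# raw difference into [-180, 180) so 1 deg and 359 deg come out 2 deg
# apart -- the same continuity the phase colormap conveys visually.
def ang_diff(a, b):
    return (a - b + 180.0) % 360.0 - 180.0

print(ang_diff(1.0, 359.0))   # 2.0   (not -358)
print(ang_diff(359.0, 1.0))   # -2.0
```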

There are times when you want to force a break in a continuous colormap. In the image below, the colormap is continuous from green to white, but we want to ensure that values at or below zero are represented as blue, to indicate water. In Tecplot 360 this can be done using the “Override band colors” option, in which we override the first color band to be blue. This makes the plot more realistic and therefore easier to interpret.

The post Colormap in Tecplot 360 appeared first on Tecplot Website.

Ansys has announced that it will acquire Zemax, maker of high-performance optical imaging system simulation solutions. The terms of the deal were not announced, but it is expected to close in the fourth quarter of 2021.
Zemax’s OpticStudio is often mentioned when users talk about designing optical, lighting, or laser systems. Ansys says that the addition of Zemax will enable Ansys to offer a “comprehensive solution for simulating the behavior of light in complex, innovative products … from the microscale with the Ansys Lumerical photonics products, to the imaging of the physical world with Zemax, to human vision perception with Ansys Speos [acquired with Optis]”.
This feels a lot like what we’re seeing in other forms of CAE, for example, when we simulate materials from nano-scale all the way to fully-produced-sheet-of-plastic-scale. There is something to be learned at each point, and simulating them all leads, ultimately, to a more fit-for-purpose end result.
Ansys is acquiring Zemax from its current owner, EQT Private Equity. EQT’s announcement of the sale says that “[w]ith the support of EQT, Zemax expanded its management team and focused on broadening the Company’s product portfolio through substantial R&D investment focused on the fastest growing segments in the optics space. Zemax also revamped its go-to-market sales approach and successfully transitioned the business model toward recurring subscription revenue”. EQT had acquired Zemax in 2018 from Arlington Capital Partners, a private equity firm, which had acquired Zemax in 2015. Why does this matter? Because the path each company takes is different — and it’s sometimes not a straight line.
Ansys says the transaction is not expected to have a material impact on its 2021 financial results.

Last year Sandvik acquired CGTech, makers of Vericut. I, like many people, thought “well, that’s interesting” and moved on. Then in July, Sandvik announced it was snapping up the holding company for Cimatron, GibbsCAM (both acquired by Battery Ventures from 3D Systems), and SigmaTEK (acquired by Battery Ventures in 2018). Then, last week, Sandvik said it was adding Mastercam to that list … It’s clearly time to dig a little deeper into Sandvik and why it’s doing this.
First, a little background on Sandvik. Sandvik operates in three main spheres: rocks, machining, and materials. For the rocks part of the business, the company makes mining/rock extraction and rock processing (crushing, screening, and the like) solutions. Very cool stuff but not relevant to the CAM discussion.
The materials part of the business develops and sells industrial materials; Sandvik is in the process of spinning out this business. Also interesting but …
The machining part of the business is where things get more relevant to us. Sandvik Machining & Manufacturing Solutions (SMM) has been supplying cutting tools and inserts for many years, via brands like Sandvik, SECO, Miranda, Walter, and Dormer Pramet, and sees a lot of opportunity in streamlining the processes around the use of specific tools and machines. Light weighting and sustainability efforts in end-industries are driving interest in new materials and more complex components, as well as tighter integration between design and manufacturing operations. That digitalization across an enterprise’s areas of business, Sandvik thinks, plays into its strengths.
According to info from the company’s 2020 Capital Markets Day, rocks and materials are steady but slow revenue growers. The company had set a modest 5% revenue growth target but had consistently been delivering closer to 3% — what to do? Like many others, the focus shifted to (1) software and (2) growth by acquisition. Buying CAM companies ticked both of those boxes, bringing repeatable, profitable growth. In an area the company already had some experience in.
Back to digitalization. If we think of a manufacturer as having (in-house or with partners) a design function, which sends the concept on to production preparation, then to machining, and, finally, to verification/quality control, Sandvik wants to expand outwards from machining to that entire world. Sandvik wants to help customers optimize the selection of tools, the machining strategy, and the verification and quality workflow.
The Manufacturing Solutions subdivision within SMM was created last year to go after this opportunity. It’s got 3 areas of focus: automating the manufacturing process, industrializing additive manufacturing, and expanding the use of metrology to real-time decision making.
The CGTech acquisition last year was the first step in realizing this vision. Vericut is prized for its ability to work with any CAM, machine tool, and cutting tool for NC code simulation, verification, optimization, and programming. CGTech is a long-time supplier of Vericut software to Sandvik’s Coromant production units, so the companies knew one another well. Vericut helps Sandvik close that digitalization/optimization loop — and, of course, gives it access to the many CAM users out there who do not use Coromant.
But verification is only one part of the overall loop, and in some senses, the last. CAM, on the other hand, is the first (after design). Sandvik saw CAM as “the most important market to enter due to attractive growth rates – and its proximity to Sandvik Manufacturing and Machining Solutions’ core business.” Adding Cimatron, GibbsCAM, SigmaTEK, and Mastercam gets Sandvik that much closer to offering clients a set of solutions to digitize their complete workflows.
And it makes business sense to add CAM to the bigger offering:
To head off one question: As of last week’s public statements, anyway, Sandvik has no interest in getting into CAD, preferring to leave that battlefield to others, and continue on its path of openness and neutrality.
And because some of you asked: there is some overlap in these acquisitions, but remarkably little, considering how established these companies all are. GibbsCAM is mostly used for production milling and turning; Cimatron is used in mold and die — and with a big presence in automotive, where Sandvik already has a significant interest; and SigmaNEST is for sheet metal fabrication and material requisitioning.
One interesting (to me, anyway) observation: 3D Systems sold Gibbs and Cimatron to Battery in November 2020. Why didn’t Sandvik snap it up then? Why wait until July 2021? A few possible reasons: Sandvik CEO Stefan Widing has been upfront about his company’s relative lack of efficiency in finding/closing/incorporating acquisitions; perhaps it was simply not ready to do a deal of this type and size eight months earlier. Another possible reason: One presumes 3D Systems “cleaned up” Cimatron and GibbsCAM before the sale (meaning, separating business systems and financials from the parent, figuring out HR, etc.) but perhaps there was more to be done, and Sandvik didn’t want to take that on. And, finally, maybe the real prize here for Sandvik was SigmaNEST, which Battery Ventures had acquired in 2018, and Cimatron and GibbsCAM simply became part of the deal. We may never know.
This whole thing is fascinating. A company out of left field, acquiring these premium PLMish assets. Spending major cash (although we don’t know how much because of non-disclosures between buyer and sellers) for a major market presence.
No one has ever asked me about a CAM roll-up, yet I’m constantly asked about how an acquirer could create another Ansys. Perhaps that was the wrong question, and it should have been about CAM all along. It’s possible that the window for another company to duplicate what Sandvik is doing may be closing since there are few assets left to acquire.
Sandvik’s CAM acquisitions haven’t closed yet, but assuming they do, there’s a strong fit between CAM and Sandvik’s other manufacturing-focused business areas. It’s more software, with its happy margins. And, finally, it lets Sandvik address the entire workflow from just after component design to machining and on to verification. Mr. Widing says that Sandvik first innovated in hardware, then in service – and now, in software to optimize the component part manufacturing process. These are where gains will come, he says, in maximizing productivity and tool longevity. Further out, he sees, measuring every part to see how the process can be further optimized. It’s a sound investment in the evolution of both Sandvik and manufacturing.
We all love a good reinvention story, and how Sandvik executes on this vision will, of course, determine if the reinvention was successful. And, of course, there’s always the potential for more news of this sort …

I missed this last month — Sandvik also acquired Cambrio, which is the combined brand for what we might know better as GibbsCAM (milling, turning), Cimatron (mold and die), and SigmaNEST (nesting, obvs). These three were spun out of 3D Systems last year, acquired by Battery Ventures — and now sold on to Sandvik.
This was announced in July, and the acquisition is expected to close in the second half of 2021 — we’ll find out on Friday if it already has.
At that time, Sandvik said its strategic aim is to “provide customers with software solutions enabling automation of the full component manufacturing value chain – from design and planning to preparation, production and verification … By acquiring Cambrio, Sandvik will establish an important position in the CAM market that includes both toolmaking and general-purpose machining. This will complement the existing customer offering in Sandvik Manufacturing Solutions”.
Cambrio has around 375 employees and in 2020, had revenue of about $68 million.
If we do a bit of math, Cambrio’s $68 million + CNC Software’s $60 million + CGTech’s (Vericut’s maker) $54 million add up to $182 million in acquired CAM revenue. Not bad.
More on Friday.

CNC Software and its Mastercam have been a mainstay among CAM providers for decades, marketing its solutions as independent, focused on the workgroup and individual. That is about to change: Sandvik, which bought CGTech late last year, has announced that it will acquire CNC Software to build out its CAM offerings.
According to Sandvik’s announcement, CNC Software brings a “world-class CAM brand in the Mastercam software suite with an installed base of around 270,000 licenses/users, the largest in the industry, as well as a strong market reseller network and well-established partnerships with leading machine makers and tooling companies”.
We were taken by surprise by the CGTech deal — but shouldn’t be by the Mastercam acquisition. Stefan Widing, Sandvik’s CEO, explains it this way: “[Acquiring Mastercam] is in line with our strategic focus to grow in the digital manufacturing space, with special attention on industrial software close to component manufacturing. The acquisition of CNC Software and the Mastercam portfolio, in combination with our existing offerings and extensive manufacturing capabilities, will make Sandvik a leader in the overall CAM market, measured in installed base. CAM plays a vital role in the digital manufacturing process, enabling new and innovative solutions in automated design for manufacturing.” The announcement goes on to say, “CNC Software has a strong market position in CAM, and particularly for small and medium-sized manufacturing enterprises (SME’s), something that will support Sandvik’s strategic ambitions to develop solutions to automate the manufacturing value chain for SME’s – and deliver competitive point solutions for large original equipment manufacturers (OEM’s).”
Sandvik says that CNC Software has 220 employees, with revenue of $60 million in 2020, and a “historical annual growth rate of approximately 10 percent and is expected to outperform the estimated market growth of 7 percent”.
No purchase price was disclosed, but the deal is expected to close during the fourth quarter.
Sandvik is holding a call about this on Friday — more updates then, if warranted.

Bentley continues to grow its deep expertise in various AEC disciplines — most recently, expanding its focus in underground resource mapping and analysis. This diversity serves it well; read on.
Unlike AspenTech, Bentley’s revenue growth is speeding up (total revenue up 21% in Q2, including a wee bit from Seequent, and up 17% for the first six months of 2021). Why the difference? IMHO, because Bentley has a much broader base, selling into many more end industries as well as to road/bridge/water/wastewater infrastructure projects that keep going, Covid or not. CEO Greg Bentley told investors that some parts of the business are back to — or even better than — pre-pandemic levels, but not yet all. He said that the company continues to struggle in industrial and resources capital expenditure projects, and therefore in the geographies (the Middle East and Southeast Asia) that are the most dependent on this sector. This is balanced against continued success in new accounts and the company’s reinvigorated selling to small and medium enterprises via its Virtuosity subsidiary — and in a resurgence in the overall commercial/facilities sector. In general, it appears that sales to contractors such as architects and engineers lag behind those to owners and operators of commercial facilities — makes sense, as many new projects are still on pause until pandemic-related effects settle down.
One unusual comment from Bentley’s earnings call that we’re going to listen for on others: The government of China is asking companies to explain why they are not using locally-grown software solutions; it appears to be offering preferential tax treatment for buyers of local software. As Greg Bentley told investors, “[d]uring the year to date, we have experienced a rash of unanticipated subscription cancellations within the mid-sized accounts in China that have for years subscribed to our China-specific enterprise program … Because we don’t think there are product issues, we will try to reinstate these accounts through E365 programs, where we can maintain continuous visibility as to their usage and engagement”. So, to recap: the government is using taxation to prefer one set of vendors over another, and all Bentley can do (really) is try to bring these accounts back and then monitor them constantly to keep on top of emerging issues. FWIW, in the pre-pandemic filings for Bentley’s IPO, “greater China, which we define as the Peoples’ Republic of China, Hong Kong and Taiwan … has become one of our largest (among our top five) and fastest-growing regions as measured by revenue, contributing just over 5% of our 2019 revenues”. Something to watch.
The company updated its financial outlook for 2021 to include the recent Seequent acquisition and this moderate level of economic uncertainty. Bentley might actually join the billion-dollar club on a pro forma basis — as if the acquisition of Seequent had occurred at the beginning of 2021. On a reported basis, the company sees total revenue between $945 million and $960 million, or an increase of around 18%, including Seequent. Excluding Seequent, Bentley sees organic revenue growth of 10% to 11%.
Much more here, on Bentley’s investor website.

We still have to hear from Autodesk, but there’s been a lot of AECish earnings news over the last few weeks. This post starts a modest series as we try to catch up on those results.
AspenTech reported results for its fiscal fourth quarter of 2021 last week: total revenue of $198 million in FQ4, down 2% from a year ago. License revenue was $145 million, down 3%; maintenance revenue was $46 million, basically flat compared to a year earlier; and services and other revenue was $7 million, up 9%.
For the year, total revenue was up 19% to $709 million, license revenue was up 28%, maintenance was up 4% and services and other revenue was down 18%.
Looking ahead, CEO Antonio Pietri said that he is “optimistic about the long-term opportunity for AspenTech. The need for our customers to operate their assets safely, sustainably, reliably and profitably has never been greater … We are confident in our ability to return to double-digit annual spend growth over time as economic conditions and industry budgets normalize.” The company sees fiscal 2022 total revenue of $702 million to $737 million, which is up just $10 million from fiscal 2021 at the midpoint.
Why the slowdown in FQ4 from earlier in the year? And why the modest guidance for fiscal 2022? One word: Covid. And the uncertainty it creates among AspenTech’s customers when it comes to spending precious cash. AspenTech expects its visibility to improve when new budgets are set in the calendar fourth quarter. By then, AspenTech hopes, its customers will have a clearer view of reopening, consumer spending, and the timing of an eventual recovery.
Lots more detail here on AspenTech’s investor website.
Next up, Bentley. Yup. Alphabetical order.
There is an interesting new trend in using Computational Fluid Dynamics (CFD). Until recently CFD simulation was focused on existing and future things, think flying cars. Now we see CFD being applied to simulate fluid flow in the distant past, think fossils.
CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation
Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.
Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature
It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.
CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors. Illustration only, not part of the study.
Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs, mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment researchers have enlisted the help of Computational Fluid Dynamics (CFD).
CFD Water Flow Simulation over a Parvancorina: Forward direction. Illustration only, not part of the study.
Some of nature's smallest aerodynamic specialists - insects - have provided a clue to more efficient and robust wind turbine design.
Dragonfly: Yellow-winged Darter. License: CC BY-SA 2.5, André Karwath.
The recent attempt to break the 2 hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.
2 Hour Marathon Attempt
In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve y = 0.1·sin(πx) on 0 ≤ x ≤ 1.
First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:
/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
convertToMeters 1;
vertices
(
    (-1 0 0) // 0
    (0 0 0)  // 1
    (1 0 0)  // 2
    (2 0 0)  // 3
    (-1 2 0) // 4
    (0 2 0)  // 5
    (1 2 0)  // 6
    (2 2 0)  // 7
    (-1 0 1) // 8
    (0 0 1)  // 9
    (1 0 1)  // 10
    (2 0 1)  // 11
    (-1 2 1) // 12
    (0 2 1)  // 13
    (1 2 1)  // 14
    (2 2 1)  // 15
);
blocks
(
    hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
    hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
    hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);
edges
(
);
boundary
(
    inlet
    {
        type patch;
        faces
        (
            (0 8 12 4)
        );
    }
    outlet
    {
        type patch;
        faces
        (
            (3 7 15 11)
        );
    }
    lowerWall
    {
        type wall;
        faces
        (
            (0 1 9 8)
            (1 2 10 9)
            (2 3 11 10)
        );
    }
    upperWall
    {
        type patch;
        faces
        (
            (4 12 13 5)
            (5 13 14 6)
            (6 14 15 7)
        );
    }
    frontAndBack
    {
        type empty;
        faces
        (
            (8 9 13 12)
            (9 10 14 13)
            (10 11 15 14)
            (1 0 4 5)
            (2 1 5 6)
            (3 2 6 7)
        );
    }
);
// ************************************************************************* //
This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!
So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary, which is simply a list of edge definitions and their interpolation points:
edges
(
    polyLine 1 2
    (
        (0 0 0)
        (0.1 0.0309016994 0)
        (0.2 0.0587785252 0)
        (0.3 0.0809016994 0)
        (0.4 0.0951056516 0)
        (0.5 0.1 0)
        (0.6 0.0951056516 0)
        (0.7 0.0809016994 0)
        (0.8 0.0587785252 0)
        (0.9 0.0309016994 0)
        (1 0 0)
    )
    polyLine 9 10
    (
        (0 0 1)
        (0.1 0.0309016994 1)
        (0.2 0.0587785252 1)
        (0.3 0.0809016994 1)
        (0.4 0.0951056516 1)
        (0.5 0.1 1)
        (0.6 0.0951056516 1)
        (0.7 0.0809016994 1)
        (0.8 0.0587785252 1)
        (0.9 0.0309016994 1)
        (1 0 1)
    )
);
The sub-dictionary above is just a list of points on the curve. The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method is spline.
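Typing the interpolation points by hand gets tedious. Here is a minimal Python helper (my own sketch, not part of blockMesh) that generates the point list, assuming the bump follows y = 0.1·sin(πx), which the points listed above satisfy:

```python
import math

def bump_points(n=11, amplitude=0.1, z=0):
    """Generate blockMesh polyLine points for a sinusoidal bump
    y = amplitude*sin(pi*x) spanning x = 0..1 at a given z-plane."""
    points = []
    for i in range(n):
        x = i / (n - 1)
        y = amplitude * math.sin(math.pi * x)
        points.append(f"({x:.1f} {y:.10g} {z:g})")
    return points

# Emit the edges entry for the front plane (z = 0)
print("polyLine 1 2\n(")
for p in bump_points(z=0):
    print("    " + p)
print(")")
```

Paste the output into the edges sub-dictionary, then repeat with z=1 for the back plane (the polyLine 9 10 entry).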

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!
Cheers.
This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.
Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.
Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.
In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.
Without going into detail about Schlieren and Shadowgraph themselves, you primarily need to understand that they represent visualizations of the first and second derivatives of the flow field's refractive index (which is directly related to density).
In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result, you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So, for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.
For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren, Shadowgraph is not directional and shows you the Laplacian of the refractive index field (or density field).
In this post, I’ll use a simple case I did previously (https://curiosityfluids.com/2016/03/28/mach-1-5-flow-over-23-degree-wedge-rhocentralfoam/) as an example and produce some synthetic Schlieren and Shadowgraph images using the data.
Well, as you might expect from the introduction, we simply do this by visualizing the gradients of the density field.
In ParaView the necessary tool for this is:
Gradient of Unstructured DataSet:

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

To do this, simply set the “Scalar Array” to the density field (rho), and change the result array name to SyntheticSchlieren. Now you should see something like this:

There are a few problems with the above image: (1) Schlieren images are directional, and this is a magnitude; (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.
To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:


The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity: ∇²ρ = ∇·(∇ρ).
Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!
To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, deselect “Compute Gradient”, then select “Compute Divergence” and change the divergence array name to Shadowgraph.
Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these images mean: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.
This means that, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end, though, you can end up with extremely realistic and accurate synthetic Schlieren images.
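The same math can be prototyped outside ParaView. Here is a short NumPy sketch (an illustration of the underlying calculus, not the ParaView workflow itself): synthetic Schlieren is one directional component of the density gradient, and shadowgraph is the divergence of that gradient, i.e. the Laplacian:

```python
import numpy as np

def synthetic_images(rho, dx=1.0, dy=1.0):
    """Return (schlieren_y, shadowgraph) for a 2D density field rho.

    schlieren_y : vertical density gradient (horizontal knife edge)
    shadowgraph : Laplacian of density, computed as div(grad(rho))
    """
    drho_dy, drho_dx = np.gradient(rho, dy, dx)  # gradient components
    d2y, _ = np.gradient(drho_dy, dy, dx)        # d2(rho)/dy2
    _, d2x = np.gradient(drho_dx, dy, dx)        # d2(rho)/dx2
    return drho_dy, d2x + d2y

# Quick sanity check on rho = x^2 + y^2, whose Laplacian is 4
y, x = np.mgrid[0:32, 0:32].astype(float)
schlieren, shadow = synthetic_images(x**2 + y**2)
```

For a real case, you would load rho from your solution (e.g. via a VTK reader) and then, as discussed above, tune the display scale to match the experiment.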
Hopefully this post will be helpful to some of you out there. Cheers!
Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post: https://curiosityfluids.com/2019/02/15/sutherlands-law/
The law is given by:
μ = μ_ref · (T/T_ref)^(3/2) · (T_ref + S)/(T + S)
It is also often simplified (as it is in OpenFOAM) to:
μ = A_s · T^(3/2)/(T_s + T)
In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can simply create your own Sutherland coefficients using least-squares fitting in Python 3.
So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find, and if you do find them, they can be hard to reference and you may not know how accurate they are. Second, creating your own coefficients makes a ton of sense from an academic point of view: in your thesis or paper, you can say that you created them yourself, and you can give an exact number for the error in the temperature range you are investigating.
So let’s say we are looking for a viscosity model of nitrogen N2 – and we can’t find the coefficients anywhere, or, for the second reason above, you’ve decided it’s best to create your own.
By far the simplest way to achieve this is using Python and the scipy.optimize package.
Step 1: Get Data
The first step is to find some well-known, and easily cited, source for viscosity data. I usually use the NIST WebBook (https://webbook.nist.gov/), but occasionally the temperatures there aren’t high enough, so you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:
| Temperature (K) | Viscosity (Pa.s) |
| 200 | 0.000012924 |
| 400 | 0.000022217 |
| 600 | 0.000029602 |
| 800 | 0.000035932 |
| 1000 | 0.000041597 |
| 1200 | 0.000046812 |
| 1400 | 0.000051704 |
| 1600 | 0.000056357 |
| 1800 | 0.000060829 |
| 2000 | 0.000065162 |
This data is the dynamic viscosity of nitrogen N2 pulled from the NIST database at 0.101 MPa. (Note that in these ranges, viscosity should be temperature dependent only.)
Step 2: Use Python to fit the data
If you are unfamiliar with Python, this may seem a little foreign to you, but Python is extremely simple.
First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
Now we define the Sutherland function:
def sutherland(T, As, Ts):
return As*T**(3/2)/(Ts+T)
Next we input the data:
T=[200,
400,
600,
800,
1000,
1200,
1400,
1600,
1800,
2000]
mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]
Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least-squares minimization to solve for the unknown coefficients. It returns the array popt, which contains our desired variables As and Ts, along with the covariance matrix pcov:
popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
Now we can just output our data to the screen and plot the results if we so wish:
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')
xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)
plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
Overall the entire code looks like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def sutherland(T, As, Ts):
return As*T**(3/2)/(Ts+T)
T=[200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]
popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')
xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)
plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()
And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!
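Since being able to quote the fit error was one of the selling points above, here is a short follow-on sketch that quantifies the relative error of the fit against the NIST data, using the coefficients just computed (for this dataset, the worst point is at 200 K, at roughly 7–8%):

```python
# Quantify the Sutherland fit error against the NIST data,
# using the coefficients computed above.
As, Ts = 1.55902e-6, 168.766

T = [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
mu = [0.000012924, 0.000022217, 0.000029602, 0.000035932, 0.000041597,
      0.000046812, 0.000051704, 0.000056357, 0.000060829, 0.000065162]

def sutherland(T, As, Ts):
    return As * T**(3/2) / (Ts + T)

# Relative error at each data point
rel_err = [abs(sutherland(t, As, Ts) - m) / m for t, m in zip(T, mu)]
for t, e in zip(T, rel_err):
    print(f"T = {t:5d} K   relative error = {100*e:5.2f} %")
print(f"Maximum relative error: {100*max(rel_err):.2f} %")
```

If the low-temperature error matters for your application, you can weight the fit (curve_fit's sigma argument) or restrict the fitting range to the temperatures you actually simulate.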

In this post, we looked at how we can simply take a database of viscosity-temperature data and use the Python package SciPy to solve for our unknown Sutherland viscosity coefficients. The NIST database was used to grab some data, and the data was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.
This task could also easily be accomplished using the Matlab curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.
The most common complaint I hear, and the most common problem I observe with OpenFOAM, is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is just as steep for any other software.
There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.
While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is just as likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and Python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.
Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:
(1) Understand CFD
This may seem obvious… but it's not to some. Troubleshooting bad simulation results or unstable simulations that crash is impossible if you don't have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:
(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by F. Moukalled, L. Mangani, and M. Darwish
(b) An introduction to computational fluid dynamics – the finite volume method – by H K Versteeg and W Malalasekera
(c) Computational fluid dynamics – the basics with applications – By John D. Anderson
(2) Understand fluid dynamics
Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.
(3) Avoid building cases from scratch
Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!
As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.
(4) Using Ubuntu makes things much easier
This is strictly my opinion, but I have found it to be true. Yes, it's true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can't really speak to how well they work – mostly because I've never bothered. Once you unlock the power of Linux, the only reason to use Windows is for Microsoft Office (I guess unless you're a gamer – and even then, more and more games are now on Linux). Not only that, but the VAST majority of forums and troubleshooting associated with OpenFOAM that you'll find on the internet are from Ubuntu users.
I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu, plus a Windows VirtualBox, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual booting is another option, but seamlessly moving between the environments is easier.
(5) If you’re struggling, simplify
Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.
(6) Familiarize yourself with the cfd-online forum
If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks in your simulations.
(7) The results from checkMesh matter
If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:
http://www.wolfdynamics.com/wiki/OFtipsandtricks.pdf
(8) CFL Number Matters
If you are running a transient case, the Courant–Friedrichs–Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time-step is too large, you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the timestep by a factor of 2. More often than not, this solves the problem.
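As a rough illustration of what this constraint looks like in numbers (generic 1D advection arithmetic, not OpenFOAM-specific code), the Courant number ties your time step to the local velocity and cell size:

```python
def courant_number(u, dt, dx):
    """Advective Courant number Co = |u| * dt / dx (1D estimate)."""
    return abs(u) * dt / dx

def max_stable_dt(u, dx, co_max=1.0):
    """Largest time step keeping Co <= co_max for the fastest cell."""
    return co_max * dx / abs(u)

# Example: 100 m/s flow through 1 mm cells
co = courant_number(u=100.0, dt=1e-5, dx=1e-3)    # Co for a 1e-5 s step
dt = max_stable_dt(u=100.0, dx=1e-3, co_max=0.5)  # step for Co <= 0.5
```

In practice, the fastest flow through the smallest cell sets the limit, which is why halving the time step after a crash so often works.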
For large time stepping, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:
https://holzmann-cfd.de/publications/mathematics-numerics-derivations-and-openfoam
For the record, this point falls into point (1), Understanding CFD.
(9) Work through the OpenFOAM Wiki “3 Week” Series
If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:
https://wiki.openfoam.com/%223_weeks%22_series
If you are a graduate student, and have no job to do other than learn OpenFOAM, it will not take you the full 3 weeks. This series touches on all the necessary points you need to get started.
(10) OpenFOAM is not a second-tier software – it is top tier
I know some people who have started out with the attitude from the get-go that they should be using a different software. They think somehow Open-Source means that it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open source package. The number of OpenFOAM citations has grown consistently every year (https://www.linkedin.com/feed/update/urn:li:groupPost:1920608-6518408864084299776/?commentUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518932944235610112%29&replyUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518956058403172352%29).
In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern, and changing software is more trouble than it's worth. OpenFOAM has been widely benchmarked, and widely validated from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren't good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, when something goes wrong you will immediately think there is something wrong with the program… which is silly – and you may quit.
(11) Meshing… Ugh Meshing
For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post (https://curiosityfluids.com/2019/02/14/high-level-overview-of-meshing-for-openfoam/) most things can be accomplished in OpenFOAM, and there are enough third party meshing programs out there that you should have no problem.
Basically, if you are starting out in CFD or OpenFOAM, you need to put in time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that's a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.
Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.
This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trade marks.

Here I will present something I’ve been experimenting with regarding a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me (who knows if you are), I simulate a lot of airfoils – partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.
Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of pain. Especially if you are starting from scratch.
The main ways that I have meshed airfoils to date have been:
(a) Mesh it in a C or O grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, use Pointwise – oh how I miss it.
But getting the mesh to look good was always sort of tedious. So I attempted to come up with a Python script that takes the airfoil data file and minimal inputs, and outputs a blockMeshDict file that you just have to run.
The goals were as follows:
(a) Create a C-grid domain
(b) Be able to specify the boundary layer growth rate
(c) Be able to set the first layer wall thickness
(d) Be mostly automatic (few user inputs)
(e) Have good mesh quality – pass all checkMesh tests
(f) Have consistent quality – meaning when I make the mesh finer, the quality stays the same or gets better
(g) Be able to do both closed and open trailing edges
(h) Be able to handle most airfoils (up to high cambers)
(i) Automatically handle hinge and flap deflections
In Rev 1 of this script, I believe I have accomplished (a) through (f). Presently, it can only handle airfoils with closed trailing edges, hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.
There are existing tools and scripts for automatically meshing airfoils, but personally I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways Python can be used to interface with OpenFOAM. So please view this both as a potentially useful script and as something you can dissect to learn how to use Python with OpenFOAM. This first version of the script leaves a lot of room for improvement, so some of you may take it and tailor it to your needs!
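To give a taste of that Python–OpenFOAM interfacing, here is a minimal sketch of how a script can emit the skeleton of a blockMeshDict (header plus vertices list). The helper name and arguments here are my own illustration, not necessarily what the actual script does:

```python
# Minimal sketch: writing the skeleton of a blockMeshDict from Python.
# The function name and arguments are illustrative, not from the real script.

HEADER = """\
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}
"""

def write_blockmesh_skeleton(path, vertices):
    """Write a blockMeshDict containing only the FoamFile header
    and a vertices list; blocks, edges, and boundary entries would
    be appended the same way."""
    lines = [HEADER, "vertices", "("]
    for x, y, z in vertices:
        lines.append(f"    ({x} {y} {z})")
    lines += [");", ""]
    with open(path, "w") as f:
        f.write("\n".join(lines))
```

A real mesher builds the vertex coordinates from the airfoil points and then writes the blocks and gradings in the same string-assembly style.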
Hopefully, this is useful to some of you out there!
You can download the script here:
https://github.com/curiosityFluids/curiosityFluidsAirfoilMesher
Here you will also find a template based on the airfoil2D OpenFOAM tutorial.
(1) Copy curiosityFluidsAirfoilMesher.py to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify the inputs in curiosityFluidsAirfoilMesher.py to your desired values. Specifically, make sure that the string variable airfoilFile refers to the right .dat file
(4) In the terminal run: python3 curiosityFluidsAirfoilMesher.py
(5) If there are no errors, run blockMesh
PS: You need to run this with Python 3, and you need to have numpy installed.
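For reference, a Selig-format .dat file is just a name line followed by x y coordinate pairs running from the trailing edge over the upper surface to the leading edge and back along the lower surface. A minimal sketch of loading one with numpy (the function name is mine, not necessarily what the script uses):

```python
import numpy as np

def load_selig(path):
    """Read a Selig-format airfoil file: one name line, then x y pairs
    from the trailing edge over the upper surface to the leading edge
    and back along the lower surface."""
    with open(path) as f:
        name = f.readline().strip()   # first line is the airfoil name
        coords = np.loadtxt(f)        # remaining lines are x y pairs
    return name, coords
```

From there the coordinates can be scaled by the chord length and fed into the block construction.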
The inputs for the script are very simple:
ChordLength: This is simply the airfoil chord length, if not equal to 1. The airfoil .dat file should have a chord length of 1; this variable allows you to scale the domain to a different size.
airfoilFile: This is a string with the name of the airfoil .dat file. It should be in the same folder as the Python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.
DomainHeight: This is the height of the domain in multiples of chords.
WakeLength: Length of the wake domain in multiples of chords
firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator
growthRate: Boundary layer growth rate
MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.
The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.
BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil
LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge
TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge
inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading, and can help improve mesh uniformity.
trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
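For the firstLayerHeight input, the estimate behind a typical y+ calculator can be sketched with the standard flat-plate approach. The 1/7th-power skin-friction correlation used here is a common approximation, not necessarily the exact one the curiosityFluids calculator uses:

```python
import math

def first_layer_height(u_inf, rho, mu, ref_length, y_plus=1.0):
    """Estimate the first-cell wall-normal height for a target y+,
    using an approximate flat-plate turbulent skin-friction correlation."""
    re = rho * u_inf * ref_length / mu            # Reynolds number
    cf = 0.026 / re ** (1.0 / 7.0)                # approximate flat-plate Cf
    tau_w = 0.5 * cf * rho * u_inf ** 2           # wall shear stress
    u_tau = math.sqrt(tau_w / rho)                # friction velocity
    return y_plus * mu / (rho * u_tau)            # first layer height y
```

As a sanity check, air at 30 m/s over a 0.3 m chord gives a first layer height on the order of 10 microns for y+ = 1, which is the kind of value you would feed into firstLayerHeight.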
Inputs:

With the above inputs, the grid looks like this:


Mesh Quality:

These are some pretty good mesh statistics. We can also view them in ParaView:

The Clark Y has some camber, so I thought it would be a logical next test after the previous symmetric airfoil. The inputs I used are basically the same as for the previous airfoil:

With these inputs, the result looks like this:


Mesh Quality:

Visualizing the mesh quality:

Here is an example of a flying wing airfoil (a good test case since the trailing edge is tilted upwards).
Inputs:

Again, these are basically the same as the others. I have found that with these settings, I get pretty consistently good results. When you change MaxCellSize, firstLayerHeight, and the gradings, some modification may be required. However, if you simply halve MaxCellSize and halve firstLayerHeight, you “should” get a similar grid quality, just much finer.
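That halving rule is easy to script if you want a quick refinement series. A hypothetical sketch, with example values and input names following the glossary above:

```python
# Sketch of generating successively refined input sets by halving
# MaxCellSize and firstLayerHeight, per the rule of thumb above.
# The base values are examples only.
base = {"MaxCellSize": 0.05, "firstLayerHeight": 2.0e-5}

def refine(inputs, levels):
    """Return a list of input dicts, each level halving both sizes."""
    return [{k: v / 2 ** i for k, v in inputs.items()} for i in range(levels)]
```

Each entry in the returned list can then be written into the script's inputs to build a grid-convergence sequence.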


Grid Quality:

Visualizing the grid quality

Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will be able to handle highly cambered airfoils and open trailing edges, as well as control surface hinges, etc.
The long-term goal is an automatic mesher with an H-grid in the spanwise direction so that readers of my blog can create semi-span wing models extremely quickly!
Comments and bug reporting encouraged!
DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify it however you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.
Here is a useful little tool for calculating the properties across a normal shock.
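For readers who want the math behind such a tool, the standard normal-shock relations for a calorically perfect gas are straightforward to code up. A minimal sketch, independent of the actual calculator:

```python
import math

def normal_shock(m1, gamma=1.4):
    """Post-shock Mach number and static ratios across a normal shock
    for a calorically perfect gas (requires upstream Mach m1 > 1)."""
    if m1 <= 1.0:
        raise ValueError("Normal shock requires upstream Mach > 1")
    m2 = math.sqrt((1 + 0.5 * (gamma - 1) * m1**2) /
                   (gamma * m1**2 - 0.5 * (gamma - 1)))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (m1**2 - 1)           # p2/p1
    rho_ratio = (gamma + 1) * m1**2 / ((gamma - 1) * m1**2 + 2)   # rho2/rho1
    t_ratio = p_ratio / rho_ratio                                  # T2/T1
    return m2, p_ratio, rho_ratio, t_ratio
```

At M1 = 2 in air (gamma = 1.4) this reproduces the textbook values M2 ≈ 0.577 and p2/p1 = 4.5.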
If you found this useful and have the need for more, visit www.stfsol.com. One of STF Solutions’ specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros, and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization, and more. Visit STF Solutions at www.stfsol.com for more information!
Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or their suitability or outcome for any given purpose.