Conversation
Provides four subcommands for querying IFC models with JSON output:
- summary: schema, entity counts, project info
- tree: spatial hierarchy (Project > Site > Building > Storey > elements)
- info: deep element inspection (attributes, psets, type, material, container)
- select: filter elements using selector syntax
Runtime introspection auto-discovers all API functions. Subcommands: list (modules/functions), docs (parameter documentation), run (execute with type-coerced CLI arguments). Output is JSON to stdout; supports --dry-run validation and -o for an alternate output path.
New subcommand: `ifcquery <file> relations <element_id>`. Returns all relationships for an element, organised by category: hierarchy (parent, container, aggregate, nest), children (contained, parts, components, openings), type relationships, groups, systems, zones, material, referenced structures, and connections/ports. Empty categories are omitted from the output. An optional `--traverse up` flag walks the hierarchy from the element up to IfcProject, returning the chain as a list.
Uses the ifcopenshell.geom.tree API directly to check an element for intersections and clearance violations against sibling elements (`--scope storey`) or the entire model (`--scope all`).
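The runtime introspection mentioned in the ifcedit summary above can be sketched with the stdlib `inspect` module; this is an illustrative reconstruction, not the actual ifcedit code (`discover_functions` is a hypothetical name):

```python
import inspect
import types

def discover_functions(module):
    # Collect public module-level functions and their parameter names so a
    # CLI can expose them as `run <name> --param value` subcommands.
    funcs = {}
    for name, obj in vars(module).items():
        if name.startswith("_") or not inspect.isfunction(obj):
            continue
        funcs[name] = list(inspect.signature(obj).parameters)
    return funcs

# Demo on a throwaway module standing in for an ifcopenshell.api module.
demo = types.ModuleType("demo")
exec("def add_wall(file, height, width=1.0):\n    return height * width", demo.__dict__)
print(discover_functions(demo))  # {'add_wall': ['file', 'height', 'width']}
```

From the discovered signatures, CLI flags and type coercion follow mechanically.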
---
This is so cool. This completely eliminates the problems with the idiosyncratic, context-less serialization, and I see you have things like ifcclash to figure out geometric relationships, which is huge for making sense of the model; explicit points carry so little semantics otherwise. Maybe the only thing missing is something like render-to-image for something multi-modal. Here's something for pyvista that I have on my disk; maybe you can adapt it into a subcommand. Is it ok to forward this to others, or do you prefer to keep it quiet for some time?

```python
import os
import glob

import numpy as np
import ifcopenshell
import ifcopenshell.geom
import pyvista as pv

IFC_FOLDER = "."


def add_shape_to_plotter(shape, plotter: pv.Plotter):
    geom = shape.geometry
    verts = np.array(geom.verts, dtype=float).reshape(-1, 3)
    if verts.size == 0:
        return
    faces = np.array(geom.faces, dtype=int).reshape(-1, 3)
    material_ids = np.array(geom.material_ids, dtype=int)
    for midx, mat in enumerate(geom.materials):
        tri_mask = material_ids == midx
        if not np.any(tri_mask):
            continue
        sub_faces = faces[tri_mask]
        # pyvista flat faces format: [n_verts, i0, i1, i2, n_verts, ...]
        faces_pv = np.hstack(
            [np.full((sub_faces.shape[0], 1), 3, dtype=int), sub_faces]
        ).ravel()
        mesh = pv.PolyData(verts, faces_pv)
        diffuse = np.clip(np.array(mat.diffuse.components), 0.0, 1.0)
        color = tuple((diffuse * 255).astype(np.uint8))
        # NaN-safe check (NaN != NaN): fall back to fully opaque
        transparency = mat.transparency if mat.transparency == mat.transparency else 0.0
        opacity = float(np.clip(1.0 - transparency, 0.0, 1.0))
        plotter.add_mesh(
            mesh,
            color=color,
            opacity=opacity,
            show_edges=False,
        )
    edges = np.array(geom.edges, dtype=int).reshape(-1, 2)
    if edges.size > 0:
        lines_pv = np.hstack(
            [np.full((edges.shape[0], 1), 2, dtype=int), edges]
        ).ravel()
        line_mesh = pv.PolyData(verts, lines=lines_pv)
        plotter.add_mesh(
            line_mesh,
            line_width=1,
            color="black",
        )


def ifc_to_pyvista_screenshot(ifc_path: str, screenshot_path: str):
    print(f"Processing: {ifc_path}")
    settings = ifcopenshell.geom.settings(USE_WORLD_COORDS=True)
    ifc_file = ifcopenshell.open(ifc_path)
    iterator = ifcopenshell.geom.iterator(settings, ifc_file, exclude=("IfcOpeningElement",))
    if not iterator.initialize():
        print(f"  No geometry found in {ifc_path}")
        return
    plotter = pv.Plotter(off_screen=True)
    plotter.background_color = "white"
    while True:
        shape = iterator.get()
        add_shape_to_plotter(shape, plotter)
        if not iterator.next():
            break
    # `or "."` guards against a bare filename with no directory component
    os.makedirs(os.path.dirname(screenshot_path) or ".", exist_ok=True)
    print(f"  Saving screenshot to: {screenshot_path}")
    plotter.show(screenshot=screenshot_path, auto_close=True)


def main():
    render_dir = os.path.join(IFC_FOLDER, "img")
    os.makedirs(render_dir, exist_ok=True)
    ifc_files = glob.glob(os.path.join(IFC_FOLDER, "*.ifc"))
    if not ifc_files:
        print(f"No IFC files found in {IFC_FOLDER}")
        return
    for ifc_path in ifc_files:
        base = os.path.splitext(os.path.basename(ifc_path))[0]
        out_png = os.path.join(render_dir, f"{base}.png")
        ifc_to_pyvista_screenshot(ifc_path, out_png)


if __name__ == "__main__":
    main()
```
---
@aothms of course you can share it :) I'm just putting it in a technical forum so I don't have to answer non-technical questions. The viewer looks fine, you should commit it to IfcOpenShell! I'm just using the revert button in Bonsai to view model changes as they happen. It urgently needs porting to an MCP server: the CLI interface works fine, but every command involves reloading the entire IFC file. An MCP server would load the file once and be an order of magnitude faster.
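The order-of-magnitude claim comes from amortising the parse: the server keeps the parsed model in memory between tool calls. A minimal sketch of the idea, with `loader` standing in for `ifcopenshell.open` (the class and method names are hypothetical, not the actual ifcmcp code):

```python
class IfcSession:
    """Cache parsed IFC models so repeated tool calls skip re-parsing."""

    def __init__(self, loader):
        self._loader = loader  # e.g. ifcopenshell.open
        self._models = {}

    def get(self, path):
        # Parse on first access only; later calls reuse the in-memory model.
        if path not in self._models:
            self._models[path] = self._loader(path)
        return self._models[path]

# Demo with a counting stand-in for the expensive parser.
calls = []
session = IfcSession(lambda p: calls.append(p) or f"model:{p}")
session.get("house.ifc")
session.get("house.ifc")
print(len(calls))  # 1 — the file was parsed once
```

A real server would also need to invalidate or reload the entry when the file changes on disk.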
---
@aothms The prompt should also include something about using the ifcclash functionality. It appears to work, but it wouldn't surprise me if there were bugs here. I have another CLI tool that queries a topologic model in the IFC (my Homemaker-generated buildings contain a full topologic CellComplex serialisation), so you can give the bot an understanding of place and spatial relationships, but this requires a Homemaker model, and doesn't update when you move a wall.
---
To be clear, this does actually work. I have taken a model, deleted windows using a vague location specification, moved remaining windows, changed their type, and raised the height of a storey - all without any major problems using just a chat interface - the bot needed to be told that editing windows involves editing their openings, but after that it had no problem.
---
Maybe it's time that we think about unifying this API (maybe not to the depth of Homemaker functionality), but around the idea of a CellComplex / CompositeSolid kind of topology that guides the creation of IFC elements. Because that is probably the right level of abstraction for dealing with programmatic creation of IFC (for the space-bounding elements at least). (We also have another, more declarative/scenegraph-style geometry+IFC creation module, hopefully soon to be merged; I think that would be quite complementary)
Model Context Protocol server implementation for AI agent access to IFC projects
---
@aothms Ok, I added an MCP server implementation: seems to work exactly the same, but much faster with big models.
---
I think we should be alarmed. I dislike the hype and bubble of AI, I hate that it depends on someone else's data centre running proprietary software consuming vast resources. I accept the entire TESCREAL criticism of the industry, and this critique is more valid and concerning than ever, but the one criticism that is no longer true is that 'it doesn't work'.

With coordinators like gas town now a thing, this is very doable: imagine a swarm of bots working on the same building project, working through a snagging list, and adding to it whenever they find a problem, generating IDS regression tests as they go. One bot is making sure that every material and construction has thermal properties, others are making sure the spatial boundaries and structural model are coordinated with the architecture, some or all are doing clash detection as they go. A pricing bot is finding prices on the internet, assigning them and building a bill of materials. The thermal model is checked through openstudio, with the results feeding back into the snagging queue. The structural model is checked for stability and loading, because all the elements in the model have density and mass, and all IsStructural elements have associated structural elements that line up and have equivalent specification. Other bots are checking all doors in firewalls have the correct fire specification. The schedule is checked for consistency, for tasks taking place in the same zone simultaneously, for elements that are not in tasks, with any problems raised in the snagging queue. Bots are checking drawings, ensuring that all elements are visible and labelled somewhere, checking that dimensions actually point to something, that section cuts are called out on plans, that labels are not clashing. All this work is on the same IFC model, in parallel, using git branches, with the results merged back as they are checked and approved; lots of this merging can happen without human approval.

Other bots find ways to reduce costs by swapping out materials and structure. A human designer can interfere with the model, move walls, change the design or whatever, and the bots will clean up the mess afterwards. This all sounds utopian, but it also sounds like hell (give me a month and a million and I'll have it all stood up and running).
---
I'm not all pessimistic about this. I recognise this in my own work as well: it's much easier now to explore more options / algorithms / approaches to a single problem. I imagine the same will happen with design variants and iteration. The problem shifts to understanding the problem; working out the details will become more and more automated. Ultimately that frees up time to build a better, more sustainable world. Probably I'm naive. But we can do our best to change the world for the better: make sure our things work on open-weights or self-hosted models, etc.
---
Converted to draft, since you said 'don't merge'.
---
@theoryshaw I've removed the 'do not merge' notice because this doesn't mess with any existing code, so can't break anything. I'd like a bit more testing of ifcquery in particular before merging.
---
These are really amazing times we live in
before:
after:
I will submit this as a PR to your branch @brunopostle for you to review.
---
@aothms the interesting thing is that when the bot gets it wrong, you just tell it what it did wrong, it sorts it out, and remembers for next time.
Guidelines for external contributors using AI coding tools, covering licensing, AI disclosure requirements, PR scope, commit style, code formatting, and testing expectations. Generated with the assistance of an AI coding tool.
Yes of course, this is all intended to be merged into the release if the quality is good enough. I notice that your pull-request removes some of the "This file was generated with the assistance of an AI coding tool." comments. I've cherry-picked the AGENTS.md file from the v0.8.0 branch; this should fix this sort of thing in the future.
Actually create the ifcquery, ifcedit and ifcmcp executables on pip install instead of relying on `python3 -m ifcquery` style usage
Added them back. Also renamed
I find it also interesting that it has the habit of checking things. After deleting, it did another query for windows and an info call on a deleted id to test. But at the same time there is also huge variance in the answers, even in these simple cases. For example, when asking for the windows by level, sometimes it starts from the window query walking upwards, sometimes it goes from the tree downwards. In the former case you don't get any levels with 0 windows. On the one hand this is of course due to the randomness/temperature, but maybe my question is also simply underconstrained and is therefore more susceptible to this kind of variance. These are probably also things that can make it into the system prompt over time:
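The up-versus-down difference is concrete: grouping windows by walking upward from each window can never produce a storey with zero windows, while walking the spatial tree downward can. A toy illustration, with plain dicts standing in for the IFC containment relation (all names here are made up for the example):

```python
def windows_by_storey_up(windows, container):
    # Walk up from each window: storeys with no windows never appear.
    grouped = {}
    for w in windows:
        grouped.setdefault(container[w], []).append(w)
    return grouped

def windows_by_storey_down(storeys, contained):
    # Walk down from the tree: every storey appears, possibly empty.
    return {s: [e for e in contained.get(s, []) if e.startswith("window")]
            for s in storeys}

container = {"window1": "ground", "window2": "ground"}
contained = {"ground": ["window1", "window2", "door1"], "roof": []}
print(windows_by_storey_up(["window1", "window2"], container))
# {'ground': ['window1', 'window2']} — 'roof' is missing entirely
print(windows_by_storey_down(["ground", "roof"], contained))
# {'ground': ['window1', 'window2'], 'roof': []}
```

Both answers are "correct"; they just answer slightly different questions, which is why an underconstrained prompt yields either.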
---
It definitely needs a reminder that doors and windows are often associated with openings, boundaries and structural elements that have closely related geometry.
---
@aothms Here's a tool doing a similar thing with topologic:
…cmcp
ifcquery: validate [--rules], schedule [--depth N], cost [--depth N],
schema <EntityType>. ifcedit: quantify list/run subcommands using ifc5d
QTO rules. schedule and cost support max_depth to limit tree expansion,
replacing truncated levels with {truncated, count}. ifcmcp gains matching
session methods, @server.tool() decorators, and OpenAI tool schemas.
Generated with the assistance of an AI coding tool.
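The `{truncated, count}` replacement described above might look roughly like this (a sketch; the actual field names and traversal in ifcquery may differ):

```python
def count_nodes(nodes):
    # Total number of descendants in a list of subtrees.
    return sum(1 + count_nodes(n.get("children", [])) for n in nodes)

def truncate_tree(node, max_depth):
    # Depth-limited copy: below max_depth, children are replaced by a
    # {"truncated": True, "count": N} stub instead of being expanded.
    children = node.get("children", [])
    out = {k: v for k, v in node.items() if k != "children"}
    if not children:
        return out
    if max_depth <= 0:
        out["truncated"] = True
        out["count"] = count_nodes(children)
        return out
    out["children"] = [truncate_tree(c, max_depth - 1) for c in children]
    return out

tree = {"name": "Project",
        "children": [{"name": "Site",
                      "children": [{"name": "Building", "children": []}]}]}
print(truncate_tree(tree, 1))
# {'name': 'Project', 'children': [{'name': 'Site', 'truncated': True, 'count': 1}]}
```

The count lets the agent know how much it is not seeing, so it can re-query with a larger `--depth` if needed.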
Adds ifc_validate, ifc_schedule, ifc_cost, ifc_schema, and ifc_quantify to the hardcoded tool list in app.js to match the extended ifcmcp session. Generated with the assistance of an AI coding tool.
Generated with the assistance of an AI coding tool.
---
Some more tests, this time adding pricing information and generating a Bill of Quantities. Notice that I didn't really give it any useful instructions, but it got there in the end regardless. Here's the generated BoQ:
I've put this example project with the CLAUDE.md findings here: https://github.com/brunopostle/simple-ifc
---
The https://github.com/brunopostle/simple-ifc project has CI tests that run on GitHub. From the GitHub CI report:
---
More tests: "There are bug reports for this project on github": brunopostle/simple-ifc#1 [edit: to be clear, snagging issues can be created on github for this IFC project and the bot tries to fix them]
This now works: `ifcedit run model.ifc project.append_asset --library library.ifc --element 5711` Generated with the assistance of an AI coding tool.
Renders IFC model geometry to a PNG image using pyvista (off_screen).
ifcquery CLI:
ifcquery <file> render [-o out.png] [--selector QUERY]
[--element ID[,ID...]] [--view iso|top|south|north|east|west]
ifcmcp MCP tool:
ifc_render(selector, element_ids, view) -> list[ImageContent]
Returns base64-encoded PNG as an MCP ImageContent block so the agent
can inspect the model geometry inline.
Generated with the assistance of an AI coding tool.
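For reference, an MCP image content block is essentially base64 data plus a mime type; a minimal sketch of the payload shape `ifc_render` presumably produces (the real implementation would use the MCP SDK's typed `ImageContent` class rather than a plain dict):

```python
import base64

def to_image_content(png_bytes: bytes) -> dict:
    # Wrap raw PNG bytes as an MCP-style image content block so an agent
    # can view the render inline instead of receiving a file path.
    return {
        "type": "image",
        "data": base64.b64encode(png_bytes).decode("ascii"),
        "mimeType": "image/png",
    }

block = to_image_content(b"\x89PNG\r\n\x1a\n")  # just the PNG signature, for demo
print(block["data"])  # iVBORw0KGgo=
```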
---
@aothms, I added a render tool to ifcquery, so you can now do things like:
…or use a selector query and/or highlight particular elements:
This all works with the ifcmcp interface, so Claude is busy looking at renders to see what to do: The highlighted render shows the wall (dark) and the door (dark element in the center-left of the wall), with the tree blob visible in white/ghost.

The door appears to be on the LEFT portion of the south wall (the west section), which is where I intended it to be. Looking at the highlighted render:
- The south wall is visible (dark, center-left of image)
- The door geometry appears on the LEFT SIDE of the wall (slightly left of center of the wall)
- The tree is visible as a large white ghost blob on the right

So the door IS on the left (west) portion of the south wall, which is correct.

The opening successfully voids the wall geometry. Looking back at the south elevation, I'm noticing the door might be harder to spot than expected: it's positioned in that left-center area of the wall, which could make it blend in with the overall composition or appear smaller than anticipated in the 2D view.

Let me re-render the full south elevation to see all elements clearly. The issue might be that the wall's boolean operation needs to be properly evaluated by the renderer, or the door's orientation is making it less visible depending on how the local axes are aligned.
Guard the face array reshape so elements whose triangulation is not divisible by 3 are silently dropped. Also wrap each _add_shape() call in a try/except so a broken shape never aborts the whole render. Generated with the assistance of an AI coding tool.
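The guard amounts to checking divisibility before the `reshape(-1, 3)` would raise; a pure-Python sketch of the behaviour (the actual patch works on numpy arrays, and the function name here is made up):

```python
def triangles_or_none(flat_indices):
    # Mirrors what numpy's reshape(-1, 3) would reject on ragged input:
    # if the flat triangle index list isn't divisible by 3, report the
    # element as unusable so the caller can skip it instead of crashing.
    if len(flat_indices) % 3 != 0:
        return None
    return [list(flat_indices[i:i + 3])
            for i in range(0, len(flat_indices), 3)]

print(triangles_or_none([0, 1, 2, 2, 3, 0]))  # [[0, 1, 2], [2, 3, 0]]
print(triangles_or_none([0, 1]))              # None
```

Combined with the per-shape try/except, one malformed element no longer aborts the whole render.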
New ifcquery modules list geometric representation contexts (with step IDs, context type, identifier, target view, parent) and material sets (IfcMaterial, layer sets, constituent sets, profile sets). Exposed as CLI subcommands and MCP tools. Generated with the assistance of an AI coding tool.
JSON parses whole numbers as int, but ifcopenshell's C++ binding requires Python floats for AGGREGATE OF DOUBLE attributes such as DirectionRatios and Coordinates. Convert list elements to float when the list already contains at least one float, leaving pure-integer lists (face indices etc.) unchanged. Generated with the assistance of an AI coding tool.
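The coercion rule described above could be implemented along these lines (a sketch with a hypothetical function name, not the committed code):

```python
def promote_mixed_numeric(values):
    # JSON gives e.g. [0, 0, 1.0] for DirectionRatios, but the C++ binding
    # wants all floats for AGGREGATE OF DOUBLE. Promote ints only when the
    # list already contains at least one float, so pure-integer lists
    # (face indices etc.) pass through untouched.
    if any(isinstance(v, float) for v in values):
        return [float(v) if isinstance(v, int) and not isinstance(v, bool) else v
                for v in values]
    return list(values)

print(promote_mixed_numeric([0, 0, 1.0]))  # [0.0, 0.0, 1.0]
print(promote_mixed_numeric([0, 1, 2]))    # [0, 1, 2]
```

The `bool` exclusion matters because `isinstance(True, int)` is true in Python.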
Generated with the assistance of an AI coding tool.
Generated with the assistance of an AI coding tool.
Exposes ShapeBuilder geometry methods via the MCP server: discovery (list), documentation (docs), and execution (shape) with entity ID coercion and numpy serialisation. Generated with the assistance of an AI coding tool.
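The numpy serialisation step is the usual trick: anything exposing `.tolist()` gets converted before JSON encoding. A minimal sketch (the committed code presumably handles more types; `jsonable` is an illustrative name):

```python
def jsonable(value):
    # Recursively convert numpy arrays/scalars (anything with .tolist())
    # and tuples into plain JSON-serialisable Python values.
    if hasattr(value, "tolist"):
        return value.tolist()
    if isinstance(value, (list, tuple)):
        return [jsonable(v) for v in value]
    if isinstance(value, dict):
        return {k: jsonable(v) for k, v in value.items()}
    return value
```

With this, a ShapeBuilder result containing numpy matrices can be dumped straight through `json.dumps`.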
















I'm putting this on a secret branch in an attempt to avoid being slop-shamed, I would slop-shame me. All code here is generated by an LLM.
This branch adds two new command-line tools:
`src/ifcquery` and `src/ifcedit`; they can be installed separately using something like `pip install -e src/ifcquery/` and `pip install -e src/ifcedit/`. More information about these tools is in their respective README files; here are summaries.

Actually ifcedit is really cool, and does a lot for such a small tool, but this is not the reason for creating this. What these tools provide is just enough discoverable functionality for an AI agent (I'm using Claude Code) to query and manipulate IFC files in the same way that the same agent can be used to work with source code.
To start, you first need to have both tools working and in your PATH, which might be simple or hard depending on your system. Then create a folder, add an IFC file to the folder, and start up your coding agent. You need a prompt to get going, something like this (untested, but it will probably work):