We propose a novel approach that leverages both labeled 3D shapes and the semantic information contained in their labels to generate semantically meaningful shape descriptors. A neural network is trained to map each shape to a descriptor that lies close to the vector representation of the shape's class in a word-vector space. The method extends easily to range scans, hand-drawn sketches and images, making cross-modal retrieval possible without the need to design a different method for each query type. We show that sketch-based shape retrieval using semantic descriptors outperforms the state of the art by large margins, and that mesh-based retrieval generates results of higher relevance to the query than current deep shape descriptors.
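As an illustration only (not the authors' implementation), the sketch below shows how a descriptor network could be trained to regress onto class word vectors. The names `ShapeEncoder`, `word_vectors`, `features` and `labels` are hypothetical placeholders, and the cosine loss is just one plausible choice.

```python
# Minimal sketch: train an encoder so that shape descriptors land near the
# word vector of the shape's class label. All inputs are placeholders.
import torch
import torch.nn as nn

class ShapeEncoder(nn.Module):
    def __init__(self, in_dim, embed_dim=300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, embed_dim))

    def forward(self, x):
        return self.net(x)

def training_step(encoder, optimiser, features, labels, word_vectors):
    """features: (B, in_dim) shape features; labels: list of class names;
    word_vectors: dict mapping class name -> embedding tensor."""
    targets = torch.stack([word_vectors[c] for c in labels])      # (B, embed_dim)
    descriptors = encoder(features)
    # Cosine loss pulls each descriptor towards its class word vector.
    loss = 1.0 - nn.functional.cosine_similarity(descriptors, targets).mean()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```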
Given a bag-of-features (BOF) shape retrieval framework, we evaluate how different parts of such a framework, such as keypoint detection and local feature encoding, affect retrieval performance.
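For reference, a generic BOF encoding of a shape looks like the sketch below: local descriptors from the whole collection are clustered into a codebook, and each shape is represented by a normalised histogram of word assignments. This is a standard pipeline for illustration, not the specific configurations evaluated.

```python
# Generic bag-of-features pipeline; assumes per-shape local descriptors are
# already extracted. Variable names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(all_local_descriptors, k=64):
    """Cluster local descriptors from the whole collection into k visual words."""
    return KMeans(n_clusters=k, n_init=10).fit(np.vstack(all_local_descriptors))

def encode_shape(local_descriptors, codebook):
    """Hard-assign each local descriptor to its nearest word, then histogram."""
    words = codebook.predict(local_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-12)   # L1-normalised BOF descriptor
```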
We propose a cluster-based approach to point set saliency detection, a challenging problem because point sets lack topological information. A point set is first decomposed into small clusters using fuzzy clustering. We then evaluate the uniqueness and spatial distribution of each cluster and combine these values into a cluster saliency function. Finally, the probabilities of points belonging to each cluster are used to assign a saliency value to each point. Our approach detects fine-scale salient features, and uninteresting regions consistently receive lower saliency values. We evaluate the proposed saliency model by testing our saliency-based keypoint detection against a 3D interest point detection benchmark. The evaluation shows that our method achieves a good balance between false positive and false negative error rates, without using any topological information.
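The sketch below is a schematic version of the cluster-to-point projection, assuming a fuzzy membership matrix and per-cluster descriptors/centroids are already available. The exact uniqueness and spatial-distribution terms used in the method may differ from this simplified weighting.

```python
# Schematic cluster saliency and its projection onto points. `descriptors` are
# per-cluster feature vectors, `centroids` per-cluster positions, and U is the
# fuzzy membership matrix (n_points x n_clusters). Weighting is illustrative.
import numpy as np

def cluster_saliency(descriptors, centroids, alpha=0.5):
    """Combine cluster uniqueness (feature distinctness) with spatial compactness."""
    d_feat = np.linalg.norm(descriptors[:, None] - descriptors[None, :], axis=-1)
    uniqueness = d_feat.mean(axis=1)                  # how distinct a cluster's features are
    d_pos = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    spread = d_pos.mean(axis=1)                       # how spatially isolated the cluster is
    s = alpha * uniqueness + (1.0 - alpha) / (1.0 + spread)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)   # normalise to [0, 1]

def point_saliency(U, cluster_sal):
    """Each point's saliency is its membership-weighted sum of cluster saliencies."""
    return U @ cluster_sal
```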
./generate_saliency.app/Contents/MacOS/generate_saliency --model=D00001.off --radius_percent=0.02 --num_segments=100 --verbose=0 --normals_from_topology=0 --output_prefix=./

It will generate two files, prefixed by output_prefix and the input basename: one .ply file containing a colored mesh, and one text file containing saliency values between 0 and 1.
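Assuming the text file stores one saliency value per line and is named by concatenating the output prefix and the input basename (both assumptions about the output format, and the filename below is hypothetical), it could be loaded as:

```python
# Load the per-point saliency values written by the tool; the filename and
# one-value-per-line format are assumptions, not documented above.
import numpy as np

saliency = np.loadtxt("./D00001_saliency.txt")   # hypothetical output filename
print(saliency.min(), saliency.max())            # values should lie in [0, 1]
```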
A challenge in vector graphics is to define primitives that offer flexible manipulation of colour gradients. We propose a new primitive, called a shading curve, that supports explicit and local gradient control. This is achieved by associating a shading profile with each side of the curve. These shading profiles, which can be manipulated manually, represent the colour gradient extending outward from the associated curve. Such explicit and local gradient control is difficult to achieve with diffusion curves, introduced in 2008, because they offer only implicit control of the colour gradient. We resolve this problem by using subdivision surfaces that are constructed from shading curves and their shading profiles.
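The sketch below illustrates only the ingredients of the primitive (a curve with a shading profile on each side), not the subdivision-surface construction; all names are hypothetical.

```python
# Illustrative data structure for a shading curve: curve geometry plus one
# shading profile per side, each mapping distance from the curve to intensity.
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

Color = Tuple[float, float, float]

@dataclass
class ShadingCurve:
    control_points: Sequence[Tuple[float, float]]    # the curve geometry
    left_profile: Callable[[float], float]           # intensity vs. distance, left side
    right_profile: Callable[[float], float]          # intensity vs. distance, right side
    base_colour: Color

    def shade(self, signed_distance: float) -> Color:
        """Scale the base colour by the profile of the side the query point lies on."""
        profile = self.left_profile if signed_distance >= 0 else self.right_profile
        w = profile(abs(signed_distance))
        return (w * self.base_colour[0], w * self.base_colour[1], w * self.base_colour[2])
```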
We present a new method for first-person sketch-based editing of terrain models. As in typical artistic drawings, the input sketch depicts complex silhouettes with cusps and T-junctions, which typically correspond to non-planar curves in 3D. After analysing depth constraints in the sketch based on perceptual cues, our method finds the best match between the sketched silhouettes and silhouettes or ridges of the input terrain. A deformation algorithm is then applied to the terrain, enabling it to exactly match the sketch from the given perspective view, while ensuring that none of the user-defined silhouettes is hidden by another part of the terrain. We extend this sketch-based terrain editing framework to handle a collection of multi-view sketches. As our results show, this method enables users to easily personalize an existing terrain while preserving its plausibility and style.
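As a highly simplified illustration of the deformation step (not the paper's algorithm), a heightfield could be pulled towards per-point target heights derived from a matched silhouette using a Gaussian falloff. The constraint format and falloff model below are assumptions made for illustration.

```python
# Simplified heightfield deformation: each constraint (row, col, target_height)
# is enforced in turn, and the edit is spread smoothly with a Gaussian falloff.
import numpy as np

def deform_heightfield(heights, constraints, sigma=10.0):
    """heights: (H, W) array; constraints: list of (row, col, target_height)."""
    out = heights.astype(float)
    ys, xs = np.mgrid[0:out.shape[0], 0:out.shape[1]]
    for r, c, target in constraints:
        delta = target - out[r, c]
        weight = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2.0 * sigma ** 2))
        out = out + delta * weight   # propagate the edit around the constraint
    return out
```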
We present a patch-based terrain synthesis framework constrained by user-specified curvilinear features such as ridges and valleys. A user specifies where terrain features appear in the generated terrain by providing a 2D sketch map or by drawing 2.5D curves in a sketching interface. A novel patch merging technique is proposed to remove the boundary artifacts created by overlapping patches.
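The snippet below is a much simpler stand-in for the patch merging technique, shown only to illustrate the idea of blending overlapping patches across their shared boundary.

```python
# Linear feathering across the overlap of two height patches; a simplistic
# illustration, not the boundary-artifact removal proposed in the paper.
import numpy as np

def feather_merge(left_patch, right_patch, overlap):
    """Blend two (H, W) height patches that share `overlap` columns at their junction."""
    t = np.linspace(0.0, 1.0, overlap)                 # 0 -> keep left, 1 -> keep right
    blended = (1 - t) * left_patch[:, -overlap:] + t * right_patch[:, :overlap]
    return np.hstack([left_patch[:, :-overlap], blended, right_patch[:, overlap:]])
```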
We present a crowd simulation behavioural model for simulating identified crowd phenomena in a virtual city, such as street flows, crowd formation and road crossings. We propose a three-tier architecture in which agents form intentions, perform path planning and control their movement. We demonstrate that this model produces the behaviours associated with pedestrian navigation in a virtual city, including goal-directed navigation, flow formation, circle creation and passageway crossing.
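The sketch below shows one way the three-tier layering (intention, path planning, movement control) could be organised; the class names and interfaces are illustrative, not taken from the model itself.

```python
# Schematic three-tier pedestrian agent: decide a goal, plan a path, then move.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Pedestrian:
    position: Point
    goal: Point = (0.0, 0.0)
    path: List[Point] = field(default_factory=list)

    def decide_intention(self, points_of_interest: List[Point]) -> None:
        """Tier 1: pick a goal, e.g. the nearest point of interest."""
        self.goal = min(points_of_interest,
                        key=lambda p: (p[0] - self.position[0]) ** 2 +
                                      (p[1] - self.position[1]) ** 2)

    def plan_path(self) -> None:
        """Tier 2: plan a route to the goal (here, a trivial straight line)."""
        self.path = [self.position, self.goal]

    def step(self, speed: float = 1.0) -> None:
        """Tier 3: low-level movement control along the planned path."""
        if not self.path:
            return
        tx, ty = self.path[-1]
        dx, dy = tx - self.position[0], ty - self.position[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist < 1e-9:
            return
        s = min(speed, dist) / dist
        self.position = (self.position[0] + dx * s, self.position[1] + dy * s)
```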