Manage Training Samples
After creating a training set, you will see an empty sample-management and image-upload interface that prompts you to upload training sample images.
If you are training a model of a specific person, upload roughly 20 clear headshots of that person taken from different angles. If you are training a specific style, you will generally need more images.
The system preprocesses each uploaded image into a 512×512 square with the content centered before training. If the training samples turn out to be insufficient, you can continue uploading more at any time.
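The center-crop-and-resize step described above can be sketched as follows. This is an illustrative reimplementation using Pillow, not the tool's actual preprocessing code; the function names and the LANCZOS filter choice are assumptions.

```python
from PIL import Image

def center_crop_box(w, h):
    """Coordinates of the largest centered square inside a w x h image."""
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    return (left, top, left + side, top + side)

def preprocess(img, size=512):
    """Center-crop to a square, then resize to size x size for training."""
    img = img.convert("RGB")
    img = img.crop(center_crop_box(*img.size))
    return img.resize((size, size), Image.LANCZOS)
```

For example, an 800×600 photo is first cropped to the centered 600×600 square, then scaled down to 512×512, so content near the left and right edges is discarded.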
Sample management works much like the file manager on your computer: you can select images by clicking or by dragging a selection box, and you can search for images by entering labels as keywords in the search box.
After you select images, the label-editing panel appears on the right, where you can add labels to the selected images. During preprocessing the system automatically assigns some labels to each image; you can supplement or modify them.
The right panel displays the labels of the selected images. Highlighted labels are shared by every selected image (the intersection), while non-highlighted labels belong to only some of them. If a single image is selected, only its labels are shown.
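The shared/partial split above is plain set arithmetic. A minimal sketch, assuming each selected image's labels arrive as a Python set (the helper name `partition_labels` is hypothetical):

```python
def partition_labels(label_sets):
    """Split the labels of the selected images into two groups:
    shared  -- present on every selected image (highlighted in the UI)
    partial -- present on some but not all images (not highlighted)
    """
    shared = set.intersection(*label_sets)
    partial = set.union(*label_sets) - shared
    return shared, partial
```

With a single selected image, `partial` is empty and only that image's own labels are returned as shared, matching the behavior described above.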
Use the dropdown menu beside each label to change which images the label applies to, or the input box below to add new labels.
Click the save button to store all sample images and the current label state. Training builds on a source model, and because most source models were trained on English text, labels in other languages may degrade the results. In some cases, however, this can be exploited deliberately: a word the source model has never seen can be bound to a concept during training and then used as a trigger to generate that specific content.
When the system detects non-English labels, it asks whether to enable automatic translation. In most cases it is advisable to accept.
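The non-English check described above can be approximated with a simple heuristic. This is only a sketch of one plausible approach (flagging any label containing non-ASCII characters); the tool's actual detection logic is unknown:

```python
def has_non_english(label: str) -> bool:
    """Rough heuristic: treat any label containing characters outside
    the ASCII range as non-English and a candidate for translation."""
    return not label.isascii()
```

This would flag labels such as CJK words while passing ordinary English tags, though it misclassifies accented Latin text; a production check would likely be more nuanced.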
Label management is the most important part of model training and directly determines the quality of the results.
Labels are used to associate images with textual information during the training process. AI strengthens the relationship between labels and corresponding feature elements in sample images, generating image content by inputting specific keywords.
When labeling, try to describe the content, texture, and style presented in each sample image as accurately and completely as possible.
For example, when training a portrait model, you can add the same person label to every image to strengthen the association, making it easy to generate images of that person by using the label.
To achieve a similar effect with style models, label all samples with the same style tag.