When you chat with DoraBot, she will create a set of nodes and connect them into a workflow to produce the results you want.
The nodes are the building blocks of the workflow and each node represents a particular task or operation that needs to be performed to create the final media.
Each node can take in inputs, perform a specific operation and produce outputs that can be used as inputs for other nodes in the workflow.
By connecting different nodes together, DoraBot can create complex workflows that can generate amazing videos, images, audio and PDFs based on your instructions.
Knowing the different types of nodes available helps you understand how DoraBot creates the media you want, and lets you give DoraBot more specific instructions for the media you have in mind. You can also tweak the workflow and its nodes after DoraBot creates them to further customize the media to your liking.
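Timedora's internal node format is not public, but as a mental model you can think of a workflow as a small directed graph: each node performs one operation and consumes the outputs of the nodes connected to it. The sketch below is purely illustrative; the `Node` class and node names are hypothetical, not Timedora's actual API.

```python
# Illustrative sketch only: Timedora's real node representation is not public.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    name: str
    operation: Callable[..., object]              # the task this node performs
    inputs: list["Node"] = field(default_factory=list)

    def run(self) -> object:
        # Each node takes its upstream nodes' outputs as its own inputs.
        return self.operation(*(n.run() for n in self.inputs))

# A toy workflow: produce a title, fetch a clip, compose them together.
caption = Node("ai_text", lambda: "A cat playing piano")
clip    = Node("stock_video", lambda: "<beach clip>")
final   = Node("compose", lambda text, video: f"{video} + title '{text}'",
               inputs=[caption, clip])

print(final.run())  # → <beach clip> + title 'A cat playing piano'
```

Running the final node pulls results through the whole graph, which is why editing any upstream node changes the finished media.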
Here is a list of the node types that DoraBot can create:
Create generative AI images. Using a prompt, you can specify the content of the image, its style, the placement of objects in it and so on. You can also specify the model to use for generating the image: some models are better at understanding complex prompts, while others are better at generating beautiful images. You can also choose the aspect ratio of the image. This node additionally lets you edit an existing image by providing it as input and describing the changes you want in the prompt. For example, you can provide an image of a product and ask to change the background to a beach or add a person next to the product.
Automatically search a stock library for relevant images using keywords. This is a great way to find relevant images without using AI units. You can specify the keywords to search for in the prompt and the node will automatically search the stock library and return relevant images. You can also specify the type of image you want, for example, "photo", "illustration", "icon" and so on.
Create generative AI videos. Using a prompt, you can specify the content of the video, its style, the placement of objects in it and so on. You can also specify the model to use: some models are better at understanding complex prompts, while others are better at generating beautiful videos. You can also choose the aspect ratio of the video. This node additionally lets you control the video by specifying its starting and ending scenes. For example, if you want a video that starts with a beach scene and ends with a sunset scene, provide an image or video of a beach as the starting scene (if you provide a video, the end of that video is used) and an image or video of a sunset as the ending scene (if a video, the start of that video is used). The node will then generate a video that starts with the beach scene and ends with the sunset scene, keeping the content in between relevant to your prompt. This is especially useful for videos that have a clear progression or storyline. You can use the 'AI Image' node to create the starting and ending scenes, or provide your own images or videos as input. Alternatively, you can use reference images to specify the style of the video or the content that should appear in it. For example, if you want a video of a cat playing piano in the style of Van Gogh, you can provide one of Van Gogh's paintings as a reference image for the style, and an image of a cat playing piano as a reference image for the content. The more reference images you provide, the better the video will be; you can provide up to 5 reference images per video.
Automatically search a stock library for relevant videos using keywords. This is a great way to find relevant videos without using AI units. You can specify the keywords to search for in the prompt and the node will automatically search the stock library and return relevant videos. You can also specify the type of video you want, for example, "drone footage", "timelapse", "slow motion" and so on.
This is one of the most powerful nodes in Timedora. This node takes in multiple images, videos, PDFs, audio files and transcriptions and composes them together into a single image, video, PDF or audio file. By asking DoraBot to create a workflow that uses this node, you can create complex media with multiple components. For example, if you want a video that has an AI generated video of a cat playing piano in the center, a title at the top and background music, you can use this node to compose all of those components into a single video. You can specify the layout of the components in the video as well as the timing of when each component appears. For example, you can specify that the title appears at the beginning of the video for 5 seconds and then disappears, while the AI generated video of the cat playing piano starts and plays for 20 seconds and the background music plays until the end of the video. In addition, this node can also apply animations, text effects, subtitles and more.
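The layout-and-timing idea above can be pictured as a simple timeline, where each component has a start time and a duration. The field names below are hypothetical, not Timedora's actual schema; this is just a sketch of how such a composition might be described and how its overall length follows from the latest-ending component.

```python
# Hypothetical timeline spec for a composition node (field names are
# illustrative, not Timedora's real format).
components = [
    {"kind": "title",    "start": 0, "duration": 5},   # title for the first 5s
    {"kind": "ai_video", "start": 5, "duration": 20},  # cat video plays next
    {"kind": "music",    "start": 5, "duration": 20},  # music until the end
]

def total_duration(components):
    """The composed video is as long as the latest end time of any component."""
    return max(c["start"] + c["duration"] for c in components)

print(total_duration(components))  # → 25
```

Overlapping time ranges (like the video and music above) are how layered components play simultaneously in the finished media.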
Create AI generated audio speech. You can customize the voice and the tone of the voice to use. The node will generate an audio file of the generated speech in any language. This is a great way to create narration for your videos or audio for your podcasts. You can also use this node to create voiceovers for your videos. However, if you want to create a video with an AI avatar speaking, you can directly use the 'Avatar Video' node instead as that node will automatically synchronize the lip movements of the avatar with the generated speech.
Create AI generated music. You can specify the type of music to create, supply lyrics to use, or create instrumental music if no lyrics are provided. This is a great way to create background music for your videos or music for your podcasts. You can also use this node to create custom music that fits the theme and style of a video. For example, if you are creating a video about space exploration, you can use this node to create futuristic-sounding music to use as the background music.
Automatically search a stock library for relevant music using keywords. The music can then be used as background music for your videos or as music for your podcasts. You can specify the keywords to search for in the prompt and the node will automatically search the stock library and return relevant music. You can also specify the type of music you want, for example, "pop", "rock", "classical" and so on.
Specify an existing video/audio/image to use as input to the workflow. This is useful when you want to use your own media as input for the workflow or when you want to use a particular stock media that is not found by the 'Stock Video' or 'Stock Image' node. You can provide a link to the media or upload the media directly to the node. For example, if you have a video of a product that you want to use as the starting scene for an AI generated video, you can use this node to provide that video as input for the workflow. You can also use this node to provide an image of a product that you want to use as a reference image for an AI generated video.
Extracts a particular frame from a video. You can specify which frame to extract, for example the first frame, last frame or a random frame. The extracted frame can then be used as an image input for other nodes in the workflow. For example, you can extract a frame from a video and use that as the starting scene for an AI generated video or use it as a reference image for the style of an AI generated video. You can also use this node to extract frames from a video to create a storyboard or to create a thumbnail for the video.
Generate a video using an AI avatar that can speak the provided script. You can choose from a variety of pre-designed avatars or create a custom avatar. This is a great way to create engaging videos with a human touch. You can specify the voice and tone of the avatar's speech as well as the style of the avatar's appearance. The node will automatically synchronize the lip movements of the avatar with the generated speech to create a realistic speaking avatar video. You can also specify the background of the video by providing an image or video as input for the background. For example, if you want an avatar video with a beach background, you can provide an image or video of a beach as input for the background and the node will automatically use that as the background for the avatar video.
Generate custom sound effects using AI. Describe the sound effect you want in the prompt and the node will generate it for you. For example, if you want the sound of a door creaking open, simply describe that in the prompt.
Takes the audio from a video or audio file and extracts a particular component, such as the vocals or the background music. You can specify which component to extract using a prompt, for example, "vocals only" or "background music only".