HubSpot integration

20.0 GBP · peopleperhour · Tech & Programming · Overseas

4 hours ago

Details

HubSpot integration project: Integrate HubSpot with a website or application, including tasks such as uploading data, organizing contacts, creating workflows, and setting up tracking. The freelancer should have experience with HubSpot and be able to work efficiently to complete the tasks within a limited timeframe. This work will initially take just a few hours but may lead to more work.
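As context for the contact-upload part of the brief: HubSpot exposes contacts through its CRM v3 REST API, and creating a record is a single authenticated POST. A minimal sketch follows — the property values and the token variable are illustrative, and in practice HubSpot's official client library would usually be preferred over hand-rolled requests:

```python
import json
import urllib.request

API_BASE = "https://api.hubapi.com"


def build_contact_payload(email, first_name, last_name):
    """Build the request body for HubSpot's CRM v3 contacts endpoint.

    `email`, `firstname`, and `lastname` are default HubSpot contact
    properties; any custom properties defined in the portal can be
    added to the same dict.
    """
    return {
        "properties": {
            "email": email,
            "firstname": first_name,
            "lastname": last_name,
        }
    }


def create_contact(token, payload):
    """POST the contact to /crm/v3/objects/contacts.

    `token` is a HubSpot private-app access token (sent as a Bearer
    header). Returns the created record as parsed JSON.
    """
    req = urllib.request.Request(
        f"{API_BASE}/crm/v3/objects/contacts",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For bulk uploads, the same payload shape is wrapped in an `inputs` list and sent to the `/crm/v3/objects/contacts/batch/create` endpoint instead.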

Disclaimer

This outsourcing listing is sourced from an external platform. This site only displays selected fields of the publicly available information and provides a subscription service; see the full disclaimer for details.

Similar listings

We are seeking a skilled Unreal Engine developer to create a highly realistic AI companion for care home residents. This AI companion will function as a lifelike human avatar integrated within a 32"/40" touchscreen table device, acting as a permanent assistant and companion to residents of care settings. The avatar will be visually represented as a middle-aged, well-spoken British lady, designed to offer interactive support, entertainment, and routine tracking. The successful developer will be responsible for building the following features:

Key Project Requirements:

Lifelike Avatar Creation: Create a highly realistic human avatar using Unreal Engine's MetaHuman Creator. The avatar should be a middle-aged woman with a warm, approachable demeanor and a British accent. Include detailed facial animations (smiling, frowning, blinking) and emotional expressions (happy, calm, concerned) for natural interactions.

Facial Animation and Lip-Sync: Implement lip-syncing using NVIDIA Omniverse Audio2Face or Unreal Engine's built-in lip-syncing tools to match the avatar's mouth movements with the voice output. Ensure smooth, real-time facial animations triggered by user interactions.

Text-to-Speech Integration: Integrate a Text-to-Speech (TTS) engine such as Google Cloud TTS or Amazon Polly to provide the avatar with a realistic British voice. The avatar must be able to speak naturally based on AI-generated text responses.

Conversational AI: Integrate OpenAI GPT-4 or similar NLP models for conversational capabilities. The AI must be able to process and respond to user commands and conversations in real time, and must learn and remember the user's name, preferences, and routines.

Routine Learning and Notifications: The AI companion should recognize user routines (e.g., waking up, going to bed, favorite activities) and adjust suggestions accordingly. The avatar should notify care home staff if any unusual behaviour or emergencies (such as falls) are detected.

Mood Detection (Optional): Implement mood detection using facial recognition or other techniques to adapt the avatar's expressions and tone to the user's emotional state.

Camera, Speaker, and Microphone Sync: Ensure the AI can fully sync with the Able Table's built-in camera, microphone, and speakers to enable voice interactions and visual monitoring. The microphone will capture the resident's voice for commands, which the AI will process and respond to. The camera will detect movements (for potential fall detection) and optionally recognize facial expressions to assess the resident's mood. The speakers will provide clear, high-quality voice feedback and responses from the AI.

Entertainment and App Control: The AI should be able to navigate and control Android apps such as YouTube, Wikipedia, and Google Earth to provide entertainment or relevant content based on the user's interests.

Packaging for Android: Once the Unreal Engine project is completed, package it as an Android APK that will run on a 32"/40" touchscreen Android device. Optimise the app for performance on an Android tablet, ensuring smooth operation, particularly for facial animations, voice control, and app navigation.

Deliverables:

Fully functional Unreal Engine project with the lifelike avatar and all core features.
Android APK file ready for deployment on our 32"/40" touchscreen devices.
Documentation on how to manage and update the app, as well as any specific configurations needed for optimal performance on Android.

Timeline: The project should be completed within 4-6 weeks. Milestones will be set for avatar design, AI integration, and Android packaging.

Required Skills:

Expertise in Unreal Engine and MetaHuman Creator.
Experience with Text-to-Speech and Natural Language Processing (NLP).
Familiarity with NVIDIA Omniverse Audio2Face or similar lip-syncing tools.
Experience in Android app development and packaging Unreal Engine projects for Android.
Strong understanding of machine learning (optional but preferred for the mood and routine learning features).

This is phase one, and we are looking to work with someone to develop this further.
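On the TTS requirement above: of the two engines the posting names, Amazon Polly is the more compact to sketch. The snippet below shows the shape of a synthesis request; the choice of Polly's British-English "Amy" voice and the neural engine are assumptions made here to match the brief, not something the posting specifies:

```python
def build_polly_request(text, voice_id="Amy"):
    """Request parameters for Polly's SynthesizeSpeech call.

    "Amy" is one of Polly's en-GB voices, chosen here to match the
    posting's British-accent requirement (an assumption, not a spec).
    """
    return {
        "Text": text,
        "OutputFormat": "mp3",
        "VoiceId": voice_id,
        "Engine": "neural",
    }


def synthesize_speech_bytes(text):
    """Call Amazon Polly and return the MP3 audio as bytes.

    Requires the third-party `boto3` package and configured AWS
    credentials, so the import is kept local to this function.
    """
    import boto3  # pip install boto3

    polly = boto3.client("polly")
    resp = polly.synthesize_speech(**build_polly_request(text))
    return resp["AudioStream"].read()
```

In the device described above, the returned MP3 bytes would be fed to the avatar's lip-sync stage rather than written to disk.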
400.0 GBP · Tech & Programming · peopleperhour · Overseas

10 hours ago