<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Startup on Jungbae Park</title><link>https://jungbaepark.github.io/blog/tags/startup/</link><description>Recent content in Startup on Jungbae Park</description><generator>Hugo -- 0.152.2</generator><language>en-us</language><lastBuildDate>Fri, 15 May 2020 00:00:00 +0000</lastBuildDate><atom:link href="https://jungbaepark.github.io/blog/tags/startup/index.xml" rel="self" type="application/rss+xml"/><item><title>Emotional Text-to-Speech and Voice Conversion Systems</title><link>https://jungbaepark.github.io/blog/projects/emotional-tts-humelo/</link><pubDate>Fri, 15 May 2020 00:00:00 +0000</pubDate><guid>https://jungbaepark.github.io/blog/projects/emotional-tts-humelo/</guid><description>Led development of duration-controllable TTS and emotional voice conversion at Humelo, producing two ICASSP publications (first-author oral, ICASSP 2019; ICASSP 2020). 
Won Minister of Science and ICT Special Award at K-Startup 2018.</description></item><item><title>Government &amp; Public R&amp;D Grant Management</title><link>https://jungbaepark.github.io/blog/projects/government-rd-grants/</link><pubDate>Sun, 15 Mar 2020 00:00:00 +0000</pubDate><guid>https://jungbaepark.github.io/blog/projects/government-rd-grants/</guid><description>Secured ~US$875K across three competitive Korean government R&amp;D grants (IITP, TIPS, Seoul R&amp;BD) at Humelo, covering brain-inspired AI, emotional TTS, and voice conversion research.</description></item><item><title>Polyphonic Sound Event Detection with Transfer Learning</title><link>https://jungbaepark.github.io/blog/projects/sound-event-detection-transfer-learning/</link><pubDate>Mon, 15 Apr 2019 00:00:00 +0000</pubDate><guid>https://jungbaepark.github.io/blog/projects/sound-event-detection-transfer-learning/</guid><description>Developed a convolutional bidirectional LSTM with synthetic-data-based transfer learning for polyphonic sound event detection at Humelo, achieving a +28.4% F1 improvement. 
Published at ICASSP 2019 as corresponding author.</description></item><item><title>AI Music Composition and SM Entertainment Collaboration</title><link>https://jungbaepark.github.io/blog/projects/ai-music-composition-sxsw/</link><pubDate>Fri, 15 Mar 2019 00:00:00 +0000</pubDate><guid>https://jungbaepark.github.io/blog/projects/ai-music-composition-sxsw/</guid><description>Led AI music composition and rap synthesis at Humelo, presented at SXSW 2019, collaborated with SM Entertainment and rapper Sleepy (KBS Documentary), and received coverage from 10+ national media outlets.</description></item><item><title>Speech Emotion Recognition &amp; Classification System</title><link>https://jungbaepark.github.io/blog/projects/speech-emotion-recognition/</link><pubDate>Fri, 01 Feb 2019 00:00:00 +0000</pubDate><guid>https://jungbaepark.github.io/blog/projects/speech-emotion-recognition/</guid><description>Built a multi-class speech emotion recognition system at Humelo using SpeechCNN and CRNN architectures with MFCC/Mel-spectrogram features, integrated into the Emotional TTS pipeline.</description></item></channel></rss>