About me
This is a page that is not in the main menu.
Published:
Published:
It is 2:02 a.m. right now and sleep has not come yet, so I think it is a good chance to put down some words. Since my first COVID vaccine dose in mid-June, I have found it much harder to fall asleep; I am not sure whether the vaccine is the main reason or whether it comes from another problem of mine. Anyway, with that in mind, today's blog is not a technical paper review or an algorithm implementation: this post is about my thoughts at the moment.
Published:
Following up on my brief introduction to video dialogue in the previous blog, today I will go into detail on one of the state-of-the-art approaches to this topic. The paper I want to introduce is Structured Co-reference Graph Attention for Video-grounded Dialogue, Junyeong et al., published at AAAI 2021. At a high level, the authors propose a bipartite co-reference structure to connect information across multiple modalities (visual, linguistic), and then capture information from the complex spatial and temporal dynamics of the video via graph attention. By representing the underlying dependencies between modalities, this design moves one step forward in reasoning over language and vision.
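To make the cross-modal idea concrete, here is a minimal NumPy sketch (not the authors' code) of one attention step over a bipartite visual-language graph: each visual node attends to all language nodes and aggregates their values. The function name, weight matrices, and feature sizes are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bipartite_graph_attention(vis, lang, Wq, Wk, Wv):
    """One cross-modal attention step (illustrative sketch).

    vis:  (Nv, d) visual node features
    lang: (Nl, d) language node features
    Each visual node aggregates information from every language node,
    weighted by scaled dot-product attention over the bipartite edges.
    """
    q = vis @ Wq                               # queries from the visual side
    k = lang @ Wk                              # keys from the language side
    v = lang @ Wv                              # values from the language side
    scores = q @ k.T / np.sqrt(k.shape[-1])    # (Nv, Nl) edge scores
    attn = softmax(scores, axis=-1)            # attention over language nodes
    return vis + attn @ v                      # residual update of visual nodes

rng = np.random.default_rng(0)
d = 8
vis = rng.normal(size=(4, d))                  # 4 hypothetical visual nodes
lang = rng.normal(size=(6, d))                 # 6 hypothetical language nodes
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = bipartite_graph_attention(vis, lang, Wq, Wk, Wv)
print(out.shape)  # (4, 8): each visual node now carries language context
```

The symmetric direction (language nodes attending to visual nodes) would be the same computation with the roles swapped.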
Published:
Historically, building a system that can discuss and interact with you about football matches, movies, and so on using its own knowledge has been considered a very ambitious goal. Beyond today's visual AI models, such a system must be able to infer the past from video, describe the present, and predict the future. In other words, its capacity must be enough to reproduce human-level intelligence in video understanding.
Published:
For older blogs, please visit my page on Viblo (unfortunately, all of them were written in Vietnamese). I wrote those blogs while I was starting to learn about AI.
Hanoi, Oct 20, 2019
Hanoi, Feb 20, 2020
Published:
We built an app that automatically finds multiple similar clothing items in a large-scale database. The algorithm paper was published at a machine learning conference.
Published:
We built a self-driving car, entered it in the FPT Digital Race 2018-2019 contest, and won 2nd Prize (2nd out of 200 teams).
Published:
We built a lively, visual, and practical "3D virtual assistant". Suntana brings a realistic experience to users, and it can be personalized and specialized for certain tasks, for example welcoming interviewees or booking meeting rooms.
Published:
A bokeh effect is used in photography to highlight the most significant parts of an image and blur the less important elements. Using computer vision technology, we were responsible for building Portrait Mode for the Vsmart phone camera.
Published:
A social network for sharing productivity, and a tool for splitting your ambitious targets into daily tasks.
Published:
This is a project inspired by imissmycafe.com, since I miss all of my coffee shops :( Put your cup of coffee beside you, choose your favorite Starbucks view, adjust the ambient sound, and start your energetic working day!
Published in SoICT 2018, 2018
Published in SoICT 2019, 2019
Published in ICCV 2021, 2021
Published in ECCV 2022, 2022
Published:
This is my first public talk ever. The topic is modern recommender systems: why we need them, how we collect data from users on our platform, and how we analyze that data and improve our model.
Published:
This is the panel discussion where I talked with leaders of other AI teams in Vietnam about "How can your business benefit from AI?". Big thanks to Sun Asterisk for giving me this amazing chance.
Undergraduate course, University 1, Department, 2014
Workshop, University 1, Department, 2015