Introduction
Hello again my fellow HMS enthusiasts, long time no see… or talk… or write / read… you know what I mean. My new article is about integrating one of Huawei's kits, namely Site Kit, into a project using Clean Architecture and MVVM to bring the user a great experience whilst making it easy for developers to test and maintain the application.
Before starting with the project, we have to delve into its architecture so that we don't get confused later on when looking at how the files are separated.
Clean Architecture
Clean Architecture is a software design approach that separates the design elements into well-organized levels, so the project is easy to develop, maintain and test, and the business logic is completely encapsulated.
The design elements are split into concentric layers, and the most important rule is the dependency rule: the inner layers have no dependency on the outer ones. The Clean Architecture adaptation I have chosen to illustrate is the simple app, data and domain split, from the outside in.
The domain layer is the innermost layer of the architecture, where all the business logic (the core functionality of the code) lives. It is completely encapsulated from the rest of the layers, since it tends not to change throughout the development of the project. This layer contains the Entities, Use Cases and Repository Interfaces.
The middle layer is the data layer, containing the Repository Implementations as well as the Data Sources, and it depends on the domain layer.
The outer layer is the app layer, or the presentation side of the application, containing Activities and Fragments backed by ViewModels which execute the use cases of the domain layer. It depends on both the data and the domain layers.
The workflow of Clean Architecture with MVVM (Model-View-ViewModel) is as follows:
The Fragments call methods on the ViewModels.
The ViewModels execute the Use Cases attached to them.
The Use Cases make use of the data coming from the Repositories.
The Repositories return the data from either a local or a remote Data Source.
From there the data returns to the user interface through MutableLiveData observation so we can display it to the user. In other words, a request travels from the app ring down to the data ring, and the data then flows all the way back up.
Now that we have clarified Clean Architecture, we will briefly go over MVVM to make everything clearer for the reader.
MVVM Architecture
This is another architecture pattern that aims to facilitate the developer's work and to separate the development of the graphical interface. It consists of the Model-View-ViewModel approach that was briefly mentioned in the previous section.
This software pattern consists of Views, ViewModels and Models (duhhh, how did I come up with that?!). The View is basically the user interface, made up of Activities and Fragments supporting a set of use cases, and it is connected through DataBinding to the ViewModel, which serves as an intermediary between the View and the Model, in other words between the UI and the logic behind all the use cases and methods called from the UI.
Why did I choose MVVM with Clean Architecture? Because when projects grow from small to medium or expand into bigger ones, the separation of responsibilities becomes harder as the codebase grows, making the project more error prone and increasing the difficulty of development, testing and maintenance.
With that being said, we can now move on to the development of Site Kit using Clean Architecture + MVVM.
Site Kit
Before you are able to integrate Site Kit, you should create an application and perform the necessary configurations by following this post. Afterwards we can start.
Site Kit is a site service offered by Huawei to help users find places and points of interest, including but not limited to the name of the place, its location and its address. It can also make suggestions using the autocomplete function, or use coordinates to give the user a written address and time zone. In this scenario, we will search for restaurants based on the type of food they offer, covering 6 main types: burger, pizza, taco, kebab, coffee and dessert.
Since there is no function in Site Kit that lets us search for a point of interest (POI) by such types directly, we will instead conduct a text search where the query is the type of restaurant we have picked. In the UI (the View), we call this function with the type of food passed as an argument.
Code:
// "type" holds the food type picked by the user (read from the fragment's arguments).
type = args.type.toString()
// The coordinates are hardcoded to Istanbul (41.0082, 28.9784) for this demo.
type?.let { viewModel.getSitesWithKeyword(type, 41.0082, 28.9784) }
Since we are using MVVM, we need the ViewModel to call the use case for us. Hence, in the ViewModel we add the following function to invoke the use case and then expose the LiveData that will carry the response when we move back up the data flow.
Code:
class SearchInputViewModel @ViewModelInject constructor(
    private val getSitesWithKeywordUseCase: GetSitesWithKeywordUseCase
) : BaseViewModel() {

    private val _restaurantList = MutableLiveData<ResultData<List<Restaurant>>>()
    val restaurantList: LiveData<ResultData<List<Restaurant>>>
        get() = _restaurantList

    @InternalCoroutinesApi
    fun getSitesWithKeyword(keyword: String, latitude: Double, longitude: Double) {
        viewModelScope.launch(Dispatchers.IO) {
            getSitesWithKeywordUseCase.invoke(keyword, latitude, longitude).collect { it ->
                handleTask(it) {
                    _restaurantList.postValue(it)
                }
            }
        }
    }

    companion object {
        private const val TAG = "SearchInputViewModel"
    }
}
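The ViewModel extends a BaseViewModel and uses its handleTask helper, which is not listed in this article. As a reference only, here is a minimal sketch of what such a helper could look like, assuming it simply inspects the ResultData state before handing it to the caller; the real implementation in the project may differ.
Code:
// Hypothetical sketch of the BaseViewModel used above; the actual class is not shown in the article.
abstract class BaseViewModel : ViewModel() {

    // Inspects every ResultData emission (for logging, generic error handling, etc.)
    // and then hands it to the given action, which usually posts it to a LiveData.
    protected fun <T> handleTask(result: ResultData<T>, action: (ResultData<T>) -> Unit) {
        when (result) {
            is ResultData.Loading -> Log.d(TAG, "Task is loading")
            is ResultData.Success -> Log.d(TAG, "Task finished successfully")
            is ResultData.Failed -> Log.e(TAG, "Task failed")
        }
        action(result)
    }

    companion object {
        private const val TAG = "BaseViewModel"
    }
}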
After passing through the app layer of the onion, we now call the UseCase implemented on the domain side, where we inject the SitesRepository interface so that the UseCase can make use of the data flowing in from the Repository.
Code:
class GetSitesWithKeywordUseCase @Inject constructor(private val repository: SitesRepository) {
    suspend operator fun invoke(keyword: String, lat: Double, lng: Double): Flow<ResultData<List<Restaurant>>> {
        return repository.getSitesWithKeyword(keyword, lat, lng)
    }
}
Code:
interface SitesRepository {
    suspend fun getSitesWithKeyword(keyword: String, lat: Double, lng: Double): Flow<ResultData<List<Restaurant>>>
}
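The ResultData wrapper and the Restaurant entity used in these signatures live in the domain layer but are not listed in the article. For reference, a minimal sketch of what they might look like; the states match the ones used in the repository below, while the Restaurant fields are my own assumptions.
Code:
// Hypothetical domain classes; the real project may define them differently.
sealed class ResultData<T> {
    class Loading<T> : ResultData<T>()
    data class Success<T>(val data: T) : ResultData<T>()
    data class Failed<T>(val error: String? = null) : ResultData<T>()
}

// Domain entity produced by mapping Site Kit's Site objects; field names are assumptions.
data class Restaurant(
    val name: String,
    val address: String,
    val latitude: Double,
    val longitude: Double
)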
The SitesRepository interface in the domain layer is implemented in the data layer. The implementation returns data from the remote Sites data source (again through an interface) and uses a mapper to map the Site results to a data class of type Restaurant (since what we actually want is restaurant data).
Code:
@InternalCoroutinesApi
class SitesRepositoryImpl @Inject constructor(
    private val sitesRemoteDataSource: SitesRemoteDataSource,
    private val restaurantMapper: Mapper<Restaurant, Site>
) : SitesRepository {

    override suspend fun getSitesWithKeyword(keyword: String, lat: Double, lng: Double): Flow<ResultData<List<Restaurant>>> =
        flow {
            emit(ResultData.Loading())
            val response = sitesRemoteDataSource.getSitesWithKeyword(keyword, lat, lng)
            when (response) {
                is SitesResponse.Success -> {
                    val sites = response.data.sites.orEmpty()
                    val restaurants = restaurantMapper.mapToEntityList(sites)
                    emit(ResultData.Success(restaurants))
                    Log.d(TAG, "ResultData.Success emitted ${restaurants.size}")
                }
                is SitesResponse.Error -> {
                    emit(ResultData.Failed(response.errorMessage))
                    Log.d(TAG, "ResultData.Error emitted ${response.errorMessage}")
                }
            }
        }

    companion object {
        private const val TAG = "SitesRepositoryImpl"
    }
}
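The Mapper<Restaurant, Site> injected above is also not shown in the article. A possible sketch, assuming the Restaurant fields from the earlier snippet and the standard Site Kit Site getters, could be:
Code:
// Hypothetical mapper abstraction and implementation; adjust the fields to your own entity.
interface Mapper<E, T> {
    fun mapToEntity(type: T): E
    fun mapToEntityList(typeList: List<T>): List<E> = typeList.map { mapToEntity(it) }
}

class RestaurantMapper @Inject constructor() : Mapper<Restaurant, Site> {
    override fun mapToEntity(type: Site): Restaurant =
        Restaurant(
            name = type.name.orEmpty(),
            address = type.formatAddress.orEmpty(),
            latitude = type.location.lat,
            longitude = type.location.lng
        )
}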
The SitesRemoteDataSource interface in fact only serves as an abstraction over the real data source implementation (SitesRemoteDataSourceImpl) and receives the SitesResponse coming from it.
Code:
interface SitesRemoteDataSource {
    suspend fun getSitesWithKeyword(keyword: String, lat: Double, lng: Double): SitesResponse<TextSearchResponse>
}
Code:
@ExperimentalCoroutinesApi
class SitesRemoteDataSourceImpl @Inject constructor(private val sitesService: SitesService) :
    SitesRemoteDataSource {

    override suspend fun getSitesWithKeyword(keyword: String, lat: Double, lng: Double): SitesResponse<TextSearchResponse> {
        return sitesService.getSitesByKeyword(keyword, lat, lng)
    }
}
We roll back to the Repository with whatever response we get. In the success case specifically, meaning the SitesResponse contains a list of sites, we map those results to the Restaurant data class and keep rolling the data flow back up until we can observe it in the ViewModel through the MutableLiveData and display it in the UI Fragment.
Code:
sealed class SitesResponse<T> {
    data class Success<T>(val data: T) : SitesResponse<T>()
    data class Error<T>(val errorMessage: String, val errorCode: String) : SitesResponse<T>()
}
However, before we can roll anything back, we first need a SitesResponse at all. For that we implement the framework-level SitesService, where we make the necessary API request, in our case a TextSearchRequest, by injecting Site Kit's SearchService and passing the type of food the user chose as the query, with Restaurant as the POI type.
Code:
@ExperimentalCoroutinesApi
class SitesService @Inject constructor(private val searchService: SearchService) {

    suspend fun getSitesByKeyword(keyword: String, lat: Double, lng: Double) =
        suspendCoroutine<SitesResponse<TextSearchResponse>> { continuation ->
            val callback = object : SearchResultListener<TextSearchResponse> {
                override fun onSearchResult(p0: TextSearchResponse) {
                    continuation.resume(SitesResponse.Success(data = p0))
                    Log.d(TAG, "SitesResponse.Success ${p0.totalCount} emitted to flow controller")
                }

                override fun onSearchError(p0: SearchStatus) {
                    continuation.resume(
                        SitesResponse.Error(
                            errorCode = p0.errorCode,
                            errorMessage = p0.errorMessage
                        )
                    )
                    Log.d(TAG, "SitesResponse.Error emitted to flow controller")
                }
            }

            val request = TextSearchRequest()
            val locationIstanbul = Coordinate(lat, lng)
            request.apply {
                query = keyword
                location = locationIstanbul
                hwPoiType = HwLocationType.RESTAURANT
                radius = 1000
                pageSize = 20
                pageIndex = 1
            }
            searchService.textSearch(request, callback)
        }

    companion object {
        const val TAG = "SitesService"
    }
}
After making the text search request, we get the result from the callback as a SitesResponse and start the data flow back up: the SitesResponse goes to the DataSource, from there to the Repository, then to the UseCase, and finally we observe the data in the ViewModel and display it in the Fragment / UI.
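That very last step, observing the LiveData in the Fragment and feeding the list into the UI, is not shown in the snippets above. Here is only a minimal sketch of what it could look like; the fragment name, the adapter and the way the Loading and Failed states are handled are all assumptions, not the project's actual code.
Code:
// Hypothetical Fragment-side observer; RestaurantAdapter and the UI handling are placeholders.
@AndroidEntryPoint
class SearchResultFragment : Fragment() {

    private val viewModel: SearchInputViewModel by viewModels()
    private val restaurantAdapter = RestaurantAdapter()

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        viewModel.restaurantList.observe(viewLifecycleOwner) { result ->
            when (result) {
                is ResultData.Loading -> { /* show a progress indicator */ }
                is ResultData.Success -> restaurantAdapter.submitList(result.data)
                is ResultData.Failed ->
                    Toast.makeText(requireContext(), result.error ?: "Something went wrong", Toast.LENGTH_SHORT).show()
            }
        }
    }
}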
For a better understanding of how the whole project is put together I have prepared a small demo showing the flow of the process.
Site Kit with Clean Architecture and MVVM Demo
And that was it; it looks complicated, but it really is pretty easy once you get the hang of it. Give it a shot!
Tips and Tricks
Tips are important here, as this whole process might look confusing at first glance, so here is what I would suggest:
Follow the Clean Architecture structure of the project by splitting your files into separate folders according to their function.
Use coroutines instead of threads, since they are lighter-weight to run.
Use a dependency injection framework (Hilt, Dagger) to avoid the tedious job of manual dependency injection for every class; a minimal Hilt module sketch follows this list.
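As an illustration of that last tip, here is a minimal Hilt module sketch for providing Site Kit's SearchService to the classes above. The module name, component and API key handling are assumptions, not the exact setup used in this project.
Code:
// Hypothetical Hilt module; adapt the component and key handling to your own project.
@Module
@InstallIn(ApplicationComponent::class)
object SitesModule {

    // Site Kit's SearchService is created from a context and the URL-encoded API key
    // of your app (the one configured in AppGallery Connect).
    @Provides
    @Singleton
    fun provideSearchService(@ApplicationContext context: Context): SearchService =
        SearchServiceFactory.create(context, URLEncoder.encode("your-api-key", "UTF-8"))
}
The SitesRepository, SitesRemoteDataSource and Mapper bindings can be declared in a similar module using @Binds.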
Conclusion
In this article, we covered the structure of Clean Architecture and MVVM and their importance when used together in medium / large projects. We then moved on to the implementation of the Site Kit service using the aforementioned architectures, explaining the process step by step until we retrieved the final search results. I hope you try it and like it. As always, stay healthy my friends and see you in other articles.
Reference
HMS Site Kit
Clean Architecture
MVVM with Clean Architecture
Related
Allow your users the freedom to choose their Android platform providing the same feature
Some time ago I developed a Word Search game solver Android application using the services from Firebase ML Kit.
Solve WordSearch games with Android and ML Kit
A Kotlin ML Kit Data Structure & Algorithm Story
It was an interesting trip discovering the features of a framework that allows the developer to use AI capabilities without knowing all the rocket science behind them.
Specifically, I used the document recognition feature to extract text from a word search game image.
After the text recognition phase, the output was cleaned up and arranged into a matrix to be processed by the solver algorithm. The algorithm looks for all the words formed by grouping letters according to the rules of the game: contiguous letters in all the straight directions (vertical, horizontal and diagonal). A rough sketch of this search is shown below.
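The solver itself is not the focus of this post, but the rule just described translates into a small brute-force search. This sketch assumes the cleaned-up letters are already arranged in a character matrix and the dictionary is a plain set of words; it is an illustration of the idea, not the app's actual solver.
Code:
// Sketch of the word-search rule: walk from every cell in each of the 8 straight
// directions and collect the strings that appear in the dictionary.
fun findWords(grid: List<List<Char>>, dictionary: Set<String>): Set<String> {
    val directions = listOf(
        0 to 1, 0 to -1, 1 to 0, -1 to 0,   // horizontal and vertical
        1 to 1, 1 to -1, -1 to 1, -1 to -1  // diagonals
    )
    val found = mutableSetOf<String>()
    for (row in grid.indices) {
        for (col in grid[row].indices) {
            for ((dr, dc) in directions) {
                val word = StringBuilder()
                var r = row
                var c = col
                while (r in grid.indices && c in grid[r].indices) {
                    word.append(grid[r][c])
                    if (word.toString() in dictionary) found += word.toString()
                    r += dr
                    c += dc
                }
            }
        }
    }
    return found
}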
This app ran well on all the Android devices capable of running the Firebase SDK and Google Mobile Services (GMS).
Since the second half of last year, new Huawei devices can no longer run GMS due to government restrictions; you can read more about this here:
[Update 14: Temporary License Extended Again]
Google has revoked Huawei's Android license
www.xda-developers.com
My app was not able to run on the brand new Huawei devices.
So I looked for a solution to make this case-study app run on the new Huawei devices.
Let’s follow my journey…
The Discovery of HMS ML Kit
I went through the Huawei documentation on HUAWEI Developer, the official site for Huawei developers.
There you can find many SDKs, AKA Kits, offering a set of smart features to developers.
I’ve found one offering the features that I was looking for: HMS ML Kit. It is quite similar to the one from Firebase as it allows the developer to use Machine Learning capabilities like Image, Text, Face recognition and so on.
Huawei ML Kit
In particular, for my specific use case, I used the text analyzer, which is capable of running locally and takes advantage of neural processing on the NPU hardware.
Documentation HMS ML Kit Text recognition
Integrating HMS ML Kit was super easy. If you want to give it a try, it's just a matter of adding a dependency in your build.gradle file, enabling the service from the AppGallery Connect web dashboard if you want to use the cloud API, and downloading the agconnect-services.json configuration file to use in your app.
You can refer to the official guide here for the needed steps:
Documentation HMS ML Kit
Architectural Approach
My first goal was to maintain and deploy only one APK, so I wanted to integrate both the Firebase ML Kit SDK and the HMS ML Kit one.
I thought about the main feature:
Decode the image and get back the detected text, together with the bounding boxes surrounding each character, to better display the spotted text to the user.
This was defined by the following interface:
Code:
package com.laquysoft.wordsearchai.textrecognizer

import android.graphics.Bitmap

interface DocumentTextRecognizer {
    fun processImage(bitmap: Bitmap, success: (Document) -> Unit, error: (String?) -> Unit)
}
I’ve also defined my own data classes to have a common output format from both services
Code:
data class Symbol(
    val text: String?,
    val rect: Rect,
    val idx: Int = 0,
    val length: Int = 0
)

data class Document(val stringValue: String, val count: Int, val symbols: List<Symbol>)
Document represents the text result returned by the ML Kit services. It contains a list of Symbol objects (the recognized characters), each with its own char, the bounding box surrounding it (Rect), and its index in the detected string, as both ML Kit services group several chars into a string with a single bounding box.
Then I created an object capable of instantiating the right service depending on which service (HMS or GMS) is available on the device.
Code:
object DocumentTextRecognizerService {

    private fun getServiceType(context: Context) = when {
        isGooglePlayServicesAvailable(context) -> ServiceType.GOOGLE
        isHuaweiMobileServicesAvailable(context) -> ServiceType.HUAWEI
        else -> ServiceType.GOOGLE
    }

    private fun isGooglePlayServicesAvailable(context: Context): Boolean {
        return GoogleApiAvailability.getInstance()
            .isGooglePlayServicesAvailable(context) == ConnectionResult.SUCCESS
    }

    private fun isHuaweiMobileServicesAvailable(context: Context): Boolean {
        return HuaweiApiAvailability.getInstance()
            .isHuaweiMobileServicesAvailable(context) == com.huawei.hms.api.ConnectionResult.SUCCESS
    }

    fun create(context: Context): DocumentTextRecognizer {
        val type = getServiceType(context)
        if (type == ServiceType.HUAWEI)
            return HMSDocumentTextRecognizer()
        return GMSDocumentTextRecognizer()
    }
}
This was pretty much all it took to make it work.
The ViewModel can then use the service provided
Code:
class WordSearchAiViewModel(
    private val resourceProvider: ResourceProvider,
    private val recognizer: DocumentTextRecognizer
) : ViewModel() {

    val resultList: MutableLiveData<List<String>> = MutableLiveData()
    val resultBoundingBoxes: MutableLiveData<List<Symbol>> = MutableLiveData()

    private lateinit var dictionary: List<String>

    fun detectDocumentTextIn(bitmap: Bitmap) {
        loadDictionary()
        recognizer.processImage(bitmap, {
            postWordsFound(it)
            postBoundingBoxes(it)
        },
            {
                Log.e("WordSearchAIViewModel", it)
            })
    }
    // loadDictionary(), postWordsFound() and postBoundingBoxes() are omitted from this excerpt.
by the right recognizer instantiated when the WordSearchAiViewModel is instantiated as well.
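How the recognizer reaches the ViewModel depends on your setup; without a DI framework, a small ViewModelProvider.Factory is enough. Here is a hedged sketch; the factory and the ResourceProvider wiring are assumptions, not necessarily how the original project does it.
Code:
// Hypothetical factory that picks the right recognizer for the current device.
class WordSearchAiViewModelFactory(
    private val context: Context,
    private val resourceProvider: ResourceProvider
) : ViewModelProvider.Factory {

    override fun <T : ViewModel> create(modelClass: Class<T>): T {
        val recognizer = DocumentTextRecognizerService.create(context)
        @Suppress("UNCHECKED_CAST")
        return WordSearchAiViewModel(resourceProvider, recognizer) as T
    }
}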
Running the app and choosing a word search game image on a Mate 30 Pro (an HMS device) shows this result
The Recognizer Brothers
You can check the code of the two recognizers below. What they do is use the specific SDK implementation to get the result and adapt it to the common interface; you could use virtually any other service capable of doing the same.
Code:
package com.laquysoft.wordsearchai.textrecognizer

import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

class GMSDocumentTextRecognizer : DocumentTextRecognizer {

    private val detector = FirebaseVision.getInstance().onDeviceTextRecognizer

    override fun processImage(
        bitmap: Bitmap,
        success: (Document) -> Unit,
        error: (String?) -> Unit
    ) {
        val firebaseImage = FirebaseVisionImage.fromBitmap(bitmap)
        detector.processImage(firebaseImage)
            .addOnSuccessListener { firebaseVisionDocumentText ->
                if (firebaseVisionDocumentText != null) {
                    val words = firebaseVisionDocumentText.textBlocks
                        .flatMap { it -> it.lines }
                        .flatMap { it.elements }
                    val symbols: MutableList<Symbol> = emptyList<Symbol>().toMutableList()
                    words.forEach {
                        val rect = it.boundingBox
                        if (rect != null) {
                            it.text.forEachIndexed { idx, value ->
                                symbols.add(
                                    Symbol(
                                        value.toString(),
                                        rect,
                                        idx,
                                        it.text.length
                                    )
                                )
                            }
                        }
                    }
                    val document =
                        Document(
                            firebaseVisionDocumentText.text,
                            firebaseVisionDocumentText.textBlocks.size,
                            symbols
                        )
                    success(document)
                }
            }
            .addOnFailureListener { error(it.localizedMessage) }
    }
}
Code:
package com.laquysoft.wordsearchai.textrecognizer

import android.graphics.Bitmap
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.MLFrame

class HMSDocumentTextRecognizer : DocumentTextRecognizer {

    //private val detector = MLAnalyzerFactory.getInstance().remoteDocumentAnalyzer
    private val detector = MLAnalyzerFactory.getInstance().localTextAnalyzer

    override fun processImage(
        bitmap: Bitmap,
        success: (Document) -> Unit,
        error: (String?) -> Unit
    ) {
        val hmsFrame = MLFrame.fromBitmap(bitmap)
        detector.asyncAnalyseFrame(hmsFrame)
            .addOnSuccessListener { mlDocument ->
                if (mlDocument != null) {
                    val words = mlDocument.blocks
                        .flatMap { it.contents }
                        .flatMap { it.contents }
                    val symbols: MutableList<Symbol> = emptyList<Symbol>().toMutableList()
                    words.forEach {
                        val rect = it.border
                        it.stringValue.forEachIndexed { idx, value ->
                            symbols.add(
                                Symbol(
                                    value.toString(),
                                    rect,
                                    idx,
                                    it.stringValue.length
                                )
                            )
                        }
                    }
                    val document =
                        Document(
                            mlDocument.stringValue,
                            mlDocument.blocks.size,
                            symbols
                        )
                    success(document)
                }
            }
            .addOnFailureListener { error(it.localizedMessage) }
    }
}
Conclusion
As good Android developers, we should develop and deploy our apps on all the platforms our users can reach, love and adopt, without excluding anyone.
We should spend some time trying to give all users the same experience. This is a small sample of that, and more will come in the future.
For more articles like this, you can visit the HUAWEI Developer Forum and Medium.
Previously on All About Maps: Episode 1:
The principles of clean architecture
The importance of eliminating map provider dependencies with abstraction
Drawing polylines and markers on Mapbox Maps, Google Maps (GMS), and Huawei Maps (HMS)
Episode 2: Bounded Regions
Welcome to the second episode of AllAboutMaps. In order to understand this blog post better, I would first suggest reading Episode 1; otherwise it will be difficult to follow the context.
In this episode we will talk about bounded regions:
The GPX parser datasource will parse the file to get the list of attraction points (waypoints in this case).
The datasource module will emit the bounded region information every 3 seconds.
A rectangular bounded region will be calculated around each attraction point with a given radius, using a utility method (no dependency on any map provider!).
We will move the map camera to the bounded region each time a new bounded region is emitted.
ChangeLog since Episode 1
As we all know, software development is a continuous process. It helps a lot when you have reviewers who can comment on your code, point out issues or come up with suggestions. Since this project is a one-person task, it is not always easy to spot the flaws in the code during implementation. The software gets better and hopefully evolves in a good way as we add new features. Once again, I would like to add the disclaimer that my suggestions here are not silver bullets. There are always better approaches. I am more than happy to hear your suggestions in the comments!
You can see the full code change between episode 1 and 2 here:
https://github.com/ulusoyca/AllAboutMaps/compare/episode_1-parse-gpx...episode_2-bounded-region
Here are the main changes I would like to mention:
1- Added MapLifecycleHandlerFragment.kt base class
In episode 1, I had one feature: show the polyline and markers on the map. The base class of all 3 fragments (RouteInfoMapboxFragment, RouteInfoGoogleFragment and RouteInfoHuaweiFragment) called the map view's lifecycle methods. When I added another feature (showing bounded regions), I realized that the new base class of this feature again implemented the same lifecycle methods. This is against the DRY rule (Don't Repeat Yourself)! Here is the base class I introduced, so that each feature's base class will extend this one:
Code:
/**
 * The base fragment handles map lifecycle. To use it, the mapview classes should implement
 * [AllAboutMapView] interface.
 */
abstract class MapLifecycleHandlerFragment : DaggerFragment() {

    protected lateinit var mapView: AllAboutMapView

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        mapView.onMapViewCreate(savedInstanceState)
    }

    override fun onResume() {
        super.onResume()
        mapView.onMapViewResume()
    }

    override fun onPause() {
        super.onPause()
        mapView.onMapViewPause()
    }

    override fun onStart() {
        super.onStart()
        mapView.onMapViewStart()
    }

    override fun onStop() {
        super.onStop()
        mapView.onMapViewStop()
    }

    override fun onDestroyView() {
        super.onDestroyView()
        mapView.onMapViewDestroy()
    }

    override fun onSaveInstanceState(outState: Bundle) {
        super.onSaveInstanceState(outState)
        mapView.onMapViewSaveInstanceState(outState)
    }
}
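The AllAboutMapView interface itself is not listed in this post; judging from the calls made by the base fragment above, it presumably looks roughly like this:
Code:
// Inferred from the lifecycle calls above; the real interface may declare more members.
interface AllAboutMapView {
    fun onMapViewCreate(savedInstanceState: Bundle?)
    fun onMapViewStart()
    fun onMapViewResume()
    fun onMapViewPause()
    fun onMapViewStop()
    fun onMapViewDestroy()
    fun onMapViewSaveInstanceState(outState: Bundle)
}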
Let's see the big picture now:
2- Refactored the abstraction for styles, marker options, and line options.
In the first episode, we encapsulated a dark map style inside each custom MapView. When I intended to use an outdoor map style for the second episode, I realized that my first approach was a mistake. A specific style should not be encapsulated inside a MapView; each feature should be able to select a different style. I moved the responsibility of loading the style from the MapViews to the fragments. Once the style is loaded, the style object is passed to the MapView.
Code:
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    mapView = binding.mapView
    super.onViewCreated(view, savedInstanceState)
    binding.mapView.getMapAsync { mapboxMap ->
        binding.mapView.onMapReady(mapboxMap)
        mapboxMap.setStyle(Style.OUTDOORS) {
            binding.mapView.onStyleLoaded(it)
            onMapStyleLoaded()
        }
    }
}
I also realized the need for MarkerOptions and LineOptions entities in our domain module:
Code:
data class MarkerOptions(
    var latLng: LatLng,
    var text: String? = null,
    @DrawableRes var iconResId: Int,
    var iconMapStyleId: String,
    @ColorRes var iconColor: Int,
    @ColorRes var textColor: Int
)
Code:
data class LineOptions(
    var latLngs: List<LatLng>,
    @DimenRes var lineWidth: Int,
    @ColorRes var lineColor: Int
)
The above entities have properties based on the needs of my project. I only care about the color, text, location and icon properties of the marker. For the polyline, I will customize the width, color and text properties. If your project needs to customize the marker offset, opacity, line join type or other properties, feel free to add them in your case.
These entities are mapped to corresponding map provider classes:
LineOptions:
Code:
private fun LineOptions.toGoogleLineOptions(context: Context) = PolylineOptions()
    .color(ContextCompat.getColor(context, lineColor))
    .width(resources.getDimension(lineWidth))
    .addAll(latLngs.map { it.toGoogleLatLng() })
Code:
private fun LineOptions.toHuaweiLineOptions(context: Context) = PolylineOptions()
    .color(ContextCompat.getColor(context, lineColor))
    .width(resources.getDimension(lineWidth))
    .addAll(latLngs.map { it.toHuaweiLatLng() })
Code:
private fun LineOptions.toMapboxLineOptions(context: Context): MapboxLineOptions {
    val color = ColorUtils.colorToRgbaString(ContextCompat.getColor(context, lineColor))
    return MapboxLineOptions()
        .withLineColor(color)
        .withLineWidth(resources.getDimension(lineWidth))
        .withLatLngs(latLngs.map { it.toMapboxLatLng() })
}
MarkerOptions
Code:
private fun DomainMarkerOptions.toGoogleMarkerOptions(): GoogleMarkerOptions {
    var markerOptions = GoogleMarkerOptions()
        .icon(BitmapDescriptorFactory.fromResource(iconResId))
        .position(latLng.toGoogleLatLng())
    markerOptions = text?.let { markerOptions.title(it) } ?: markerOptions
    return markerOptions
}
Code:
private fun DomainMarkerOptions.toHuaweiMarkerOptions(context: Context): HuaweiMarkerOptions {
    BitmapDescriptorFactory.setContext(context)
    var markerOptions = HuaweiMarkerOptions()
        .icon(BitmapDescriptorFactory.fromResource(iconResId))
        .position(latLng.toHuaweiLatLng())
    markerOptions = text?.let { markerOptions.title(it) } ?: markerOptions
    return markerOptions
}
Code:
private fun DomainMarkerOptions.toMapboxSymbolOptions(context: Context, style: Style): SymbolOptions {
    val drawable = ContextCompat.getDrawable(context, iconResId)
    val bitmap = BitmapUtils.getBitmapFromDrawable(drawable)!!
    style.addImage(iconMapStyleId, bitmap)
    val iconColor = ColorUtils.colorToRgbaString(ContextCompat.getColor(context, iconColor))
    val textColor = ColorUtils.colorToRgbaString(ContextCompat.getColor(context, textColor))
    var symbolOptions = SymbolOptions()
        .withIconImage(iconMapStyleId)
        .withLatLng(latLng.toMapboxLatLng())
        .withIconColor(iconColor)
        .withTextColor(textColor)
    symbolOptions = text?.let { symbolOptions.withTextField(it) } ?: symbolOptions
    return symbolOptions
}
There are minor technical details for handling the differences between the map provider APIs, but they are out of this blog post's scope.
Earlier our methods for drawing polyline and marker looked like this:
Code:
fun drawPolyline(latLngs: List<LatLng>, @ColorRes mapLineColor: Int)
fun drawMarker(latLng: LatLng, icon: Bitmap, name: String?)
After this refactor they look like this:
Code:
fun drawPolyline(lineOptions: LineOptions)
fun drawMarker(markerOptions: MarkerOptions)
It is a code smell when the number of arguments in a method increases every time you add a new feature. That's why we created data holders to pass around.
3- A secondary constructor method for LatLng
While working on this feature, I realized that a secondary constructor that builds the LatLng entity from double values would also be useful when mapping the entities to the different map providers. I mentioned the reason why I use inline classes for Latitude and Longitude in the first episode.
Code:
inline class Latitude(val value: Float)
inline class Longitude(val value: Float)

data class LatLng(
    val latitude: Latitude,
    val longitude: Longitude
) {
    constructor(latitude: Double, longitude: Double) : this(
        Latitude(latitude.toFloat()),
        Longitude(longitude.toFloat())
    )

    val latDoubleValue: Double
        get() = latitude.value.toDouble()

    val lngDoubleValue: Double
        get() = longitude.value.toDouble()
}
Bounded Region
A bounded region is used to describe a particular area (in many cases rectangular) on a map. We usually need two coordinate pairs to describe a region: the southwest and northeast corners. It is well described in this Stack Overflow answer (https://stackoverflow.com/a/31029389):
As expected, Mapbox, GMS and HMS maps all provide LatLngBounds classes. However, they require a pair of coordinates to construct the bounds, and in our case we only have one location for each attraction point: we want to show a region with a given radius around the center. We need to do a little extra work to calculate the coordinate pair, but first let's add a LatLngBounds entity to our domain module:
Code:
data class LatLngBounds(
    val southwestCorner: LatLng,
    val northeastCorner: LatLng
)
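The utility method that actually builds this bounded region from a single center point and a radius is not shown in this excerpt. A simple provider-independent sketch, using the approximation that one degree of latitude is roughly 111.3 km and scaling the longitude delta by the cosine of the latitude, could look like this (the real utility in the project may differ):
Code:
// Sketch of a map-provider-independent bounds calculation around a center point.
object LatLngBoundsUtil {

    private const val METERS_PER_DEGREE = 111_320.0

    fun boundsFrom(center: LatLng, radiusInMeters: Double): LatLngBounds {
        val latDelta = radiusInMeters / METERS_PER_DEGREE
        val lngDelta = radiusInMeters /
            (METERS_PER_DEGREE * Math.cos(Math.toRadians(center.latDoubleValue)))
        return LatLngBounds(
            southwestCorner = LatLng(center.latDoubleValue - latDelta, center.lngDoubleValue - lngDelta),
            northeastCorner = LatLng(center.latDoubleValue + latDelta, center.lngDoubleValue + lngDelta)
        )
    }
}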
Implementation
First, let's see the big (literally!) picture:
Thanks to our clean architecture, it is very easy to add a new feature with a new use case. Let's start with the domain module as always:
Code:
/**
 * Emits the list of waypoints with a given update interval
 */
class StartWaypointPlaybackUseCase
@Inject constructor(
    private val routeInfoRepository: RouteInfoRepository
) {
    suspend operator fun invoke(
        points: List<Point>,
        updateInterval: Long
    ): Flow<Point> {
        return routeInfoRepository.startWaypointPlayback(points, updateInterval)
    }
}
The user interacts with the app to start the playback of waypoints. I call this playback because playback is "the reproduction of previously recorded sounds or moving images": we have a list of points to be played back over a given time. We will periodically move the map camera from one bounded region to another. The waypoints are emitted from the datasource with a given update interval. The domain module doesn't know the implementation details; it sends the request to our datasource module.
Let's see our datasource module. We added a new method in RouteInfoDataRepository:
Code:
override suspend fun startWaypointPlayback(
    points: List<Point>,
    updateInterval: Long
): Flow<Point> = flow {
    val routeInfo = gpxFileDatasource.parseGpxFile()
    routeInfo.wayPoints.forEachIndexed { index, waypoint ->
        if (index != 0) {
            delay(updateInterval)
        }
        emit(waypoint)
    }
}.flowOn(Dispatchers.Default)
Thanks to Kotlin coroutines, it is very simple to emit the points with a delay. Roman Elizarov describes the Flow API in a very neat diagram below. If you are interested in learning more about it, his talks are the best place to start.
Long story short, our app module invokes the use case from the domain module, and the domain module forwards the request to the datasource module. The corresponding repository class inside the datasource module gets the data from the GPX datasource and orchestrates the data flow.
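On the app side, the glue is a ViewModel that starts the playback and turns every emitted waypoint into a bounded region for the camera. That class is not part of this excerpt, so here is only a rough sketch under a few assumptions: the ViewModel name, the way a Point exposes its LatLng, and the bounds utility from the earlier sketch are all mine.
Code:
// Hypothetical ViewModel wiring the use case to camera updates; names are assumptions.
class RouteInfoViewModel(
    private val startWaypointPlaybackUseCase: StartWaypointPlaybackUseCase
) : ViewModel() {

    private val _cameraBounds = MutableLiveData<LatLngBounds>()
    val cameraBounds: LiveData<LatLngBounds> = _cameraBounds

    fun startPlayback(waypoints: List<Point>, radiusInMeters: Double = 50.0) {
        viewModelScope.launch {
            // One waypoint is emitted every updateInterval milliseconds (3 seconds here).
            startWaypointPlaybackUseCase(waypoints, updateInterval = 3_000L)
                .collect { point ->
                    // Assuming the Point entity exposes its coordinate as latLng.
                    _cameraBounds.postValue(LatLngBoundsUtil.boundsFrom(point.latLng, radiusInMeters))
                }
        }
    }
}
The fragment then observes cameraBounds and asks its MapView to move the camera each time a new region arrives.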
For full content, you can visit HUAWEI Developer Forum.
In this article I will talk about HUAWEI Scene Kit. HUAWEI Scene Kit is a lightweight rendering engine that features high performance and low consumption. It provides advanced descriptive APIs for us to edit, operate, and render 3D materials. Scene Kit adopts physically based rendering (PBR) pipelines to achieve realistic rendering effects. With this Kit, we only need to call some APIs to easily load and display complicated 3D objects on Android phones.
It was previously released with just the SceneView feature, but with Scene Kit SDK version 5.0.2.300, Huawei announced the new FaceView and ARView features. With these additions, Scene Kit has made the integration of plane detection and face tracking much easier.
At this stage, the following question may come to your mind: “Since there are already ML Kit and AR Engine, why would we use Scene Kit?” Let's answer this question with an example.
Differences Between Scene Kit and AR Engine or ML Kit:
For example, say we have a shopping application, and let's assume it has a feature in the glasses purchasing section that lets the user try on glasses using AR to see how they look in real life. Here, we do not need to track facial gestures using the facial expression tracking feature provided by AR Engine. All we have to do is render a 3D object over the user's eyes, and face tracking is enough for this. If we used AR Engine directly, we would have to deal with graphics libraries like OpenGL. But by using the Scene Kit FaceView, we can easily add this feature to our application without dealing with any graphics library, because this is a basic feature and Scene Kit provides it for us.
So what distinguishes AR Engine and ML Kit from Scene Kit is that they provide more detailed controls, whereas Scene Kit only provides the basic features (I'll talk about these features later). For this reason, its integration is much simpler.
Let's examine what these features provide us.
SceneView:
With SceneView, we are able to load and render 3D materials in common scenes.
It allows us to:
Load and render 3D materials.
Load the cubemap texture of a skybox to make the scene look larger and more impressive than it actually is.
Load lighting maps to mimic real-world lighting conditions through PBR pipelines.
Swipe on the screen to view rendered materials from different angles.
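To give an idea of the usage before we get to the demo, SceneView is typically added to a layout (or subclassed) and the materials are loaded once the rendering surface is created. The Kotlin sketch below loosely follows the structure of the official SceneView sample, but treat it as a hedged illustration: the class name and asset paths are placeholders, so check the Scene Kit documentation for the exact API.
Code:
import android.content.Context
import android.util.AttributeSet
import android.view.SurfaceHolder
import com.huawei.hms.scene.sdk.SceneView

// Minimal SceneView subclass that loads a 3D scene and its environment maps
// once the rendering surface is created (asset paths are placeholders).
class ShoeSceneView(context: Context, attrs: AttributeSet?) : SceneView(context, attrs) {

    override fun surfaceCreated(holder: SurfaceHolder) {
        super.surfaceCreated(holder)
        loadScene("SceneView/shoe.glb")                      // 3D material
        loadSkyBox("SceneView/skybox_texture.dds")           // skybox cubemap
        loadSpecularEnvTexture("SceneView/specular_env.dds") // specular lighting map
        loadDiffuseEnvTexture("SceneView/diffuse_env.dds")   // diffuse lighting map
    }
}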
ARView:
ARView uses the plane detection capability of AR Engine, together with the graphics rendering capability of Scene Kit, to provide us with the capability of loading and rendering 3D materials in common AR scenes.
With ARView, we can:
Load and render 3D materials in AR scenes.
Set whether to display the lattice plane (consisting of white lattice points) to help select a plane in a real-world view.
Tap an object placed onto the lattice plane to select it. Once selected, the object will change to red. Then we can move, resize, or rotate it.
FaceView:
FaceView can use the face detection capability provided by ML Kit or AR Engine to dynamically detect faces. Along with the graphics rendering capability of Scene Kit, FaceView provides us with superb AR scene rendering dedicated for faces.
With FaceView we can:
Dynamically detect faces and apply 3D materials to the detected faces.
As I mentioned above ARView uses the plane detection capability of AR Engine and the FaceView uses the face detection capability provided by either ML Kit or AR Engine. When using the FaceView feature, we can use the SDK we want by specifying which SDK to use in the layout.
Here, we should consider the devices to be supported when choosing the SDK. You can see the supported devices in the table below. Also, for more detailed information, you can visit this page. (In addition to the table on that page, the Scene Kit SceneView feature also supports P40 Lite devices.)
Also, I think it is useful to mention some important working principles of Scene Kit:
Scene Kit
Provides a Full-SDK, which we can integrate into our app to access 3D graphics rendering capabilities, even though our app runs on phones without HMS Core.
Uses the Entity Component System (ECS) to reduce coupling and implement multi-threaded parallel rendering.
Adopts real-time PBR pipelines to make rendered images look realistic.
Supports the general-purpose GPU Turbo to significantly reduce power consumption.
Demo App
Let’s learn in more detail by integrating these 3 features of the Scene Kit with a demo application that we will develop in this section.
To configure the Maven repository address for the HMS Core SDK, add the code below to the project-level build.gradle under both:
〉 project level build.gradle > buildscript > repositories
〉 project level build.gradle > allprojects > repositories
Code:
maven { url 'https://developer.huawei.com/repo/' }
After that go to
module level build.gradle > dependencies
then add build dependencies for the Full-SDK of Scene Kit in the dependencies block.
Code:
implementation 'com.huawei.scenekit:full-sdk:5.0.2.302'
Note: When adding build dependencies, replace the version here “full-sdk: 5.0.2.302” with the latest Full-SDK version. You can find all the SDK and Full-SDK version numbers in Version Change History.
Then click Sync Now as shown below.
After the build has completed successfully, add the following line to the AndroidManifest.xml file for the camera permission.
Code:
<uses-permission android:name="android.permission.CAMERA" />
Now our project is ready for development. We can use all the functionalities of Scene Kit.
Let's say this demo app is a shopping app, and I want to use Scene Kit features in it. We'll use Scene Kit's ARView feature in the “office” section of our application to test how a plant and an aquarium look on our desk.
And in the sunglasses section, we’ll use the FaceView feature to test how sunglasses look on our face.
Finally, we will use the SceneView feature in the shoes section of our application. We’ll test a shoe to see how it looks.
We will need materials to test these properties, let’s get these materials first. I will use 3D models that you can download from the links below. You can use the same or different materials if you want.
Capability: ARView, Used Models: Plant , Aquarium
Capability: FaceView, Used Model: Sunglasses
Capability: SceneView, Used Model: Shoe
Note: I used 3D models in “.glb” format as assets for the ARView and FaceView features. However, the links above contain 3D models in “.gltf” format, so I converted the “.gltf” files to “.glb”. You can obtain a “.glb” model by uploading all the files of a downloaded model (textures, scene.bin and scene.gltf) to an online converter; any online conversion website will do.
All materials must be stored in the assets directory, so we place them under app > src > main > assets in our project. After placing them, our file structure will be as follows.
After adding the materials, we will start by adding the ARView feature first. Since we assume that there are office supplies in the activity where we will use the ARView feature, let’s create an activity named OfficeActivity and first develop its layout.
Note: Activities that use ARView must extend the Activity class. Update any activities that extend AppCompatActivity to extend Activity instead.
Example: It should be “OfficeActivity extends Activity”.
ARView
In order to use the ARView feature of the Scene Kit, we add the following ARView code to the layout (activity_office.xml file).
Code:
<com.huawei.hms.scene.sdk.ARView
android:id="@+id/ar_view"
android:layout_width="match_parent"
android:layout_height="match_parent">
</com.huawei.hms.scene.sdk.ARView>
Overview of the activity_office.xml file:
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:gravity="bottom"
tools:context=".OfficeActivity">
<com.huawei.hms.scene.sdk.ARView
android:id="@+id/ar_view"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_centerInParent="true"
android:layout_centerHorizontal="true"
android:layout_centerVertical="true"
android:gravity="bottom"
android:layout_marginBottom="30dp"
android:orientation="horizontal">
<Button
android:id="@+id/button_flower"
android:layout_width="110dp"
android:layout_height="wrap_content"
android:onClick="onButtonFlowerToggleClicked"
android:text="Load Flower"/>
<Button
android:id="@+id/button_aquarium"
android:layout_width="110dp"
android:layout_height="wrap_content"
android:onClick="onButtonAquariumToggleClicked"
android:text="Load Aquarium"/>
</LinearLayout>
</RelativeLayout>
We specified two buttons: one for loading the aquarium and the other for loading a plant. Now, let's do the initializations in OfficeActivity and activate the ARView feature in our application. First, let's override the onCreate() method to obtain the ARView and the buttons that will trigger object loading.
Code:
private ARView mARView;
private Button mButtonFlower;
private boolean isLoadFlowerResource = false;
private boolean isLoadAquariumResource = false;
private Button mButtonAquarium;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_office);
mARView = findViewById(R.id.ar_view);
mButtonFlower = findViewById(R.id.button_flower);
mButtonAquarium = findViewById(R.id.button_aquarium);
Toast.makeText(this, "Please move the mobile phone slowly to find the plane", Toast.LENGTH_LONG).show();
}
Then add the methods that will be triggered when the buttons are clicked. Here we check the loading status of the object and either load or clear it depending on its current state.
For plant button:
Code:
public void onButtonFlowerToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadFlowerResource) {
// Load 3D model.
mARView.loadAsset("ARView/flower.glb");
float[] scale = new float[] { 0.15f, 0.15f, 0.15f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.500f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadFlowerResource = true;
mButtonFlower.setText("Clear Flower");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadFlowerResource = false;
mButtonFlower.setText("Load Flower");
}
}
For the aquarium button:
Code:
public void onButtonAquariumToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadAquariumResource) {
// Load 3D model.
mARView.loadAsset("ARView/aquarium.glb");
float[] scale = new float[] { 0.015f, 0.015f, 0.015f };
float[] rotation = new float[] { 0.0f, 0.0f, 0.0f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadAquariumResource = true;
mButtonAquarium.setText("Clear Aquarium");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadAquariumResource = false;
mButtonAquarium.setText("Load Aquarium");
}
}
Now let's talk about what the code here does, line by line. First, we call ARView.enablePlaneDisplay() with true so that, when a plane is detected in the real world, a lattice plane appears on it.
Code:
mARView.enablePlaneDisplay(true);
Then we check whether the object has been loaded or not. If it is not loaded, we specify the path to the 3D model we selected with the mARView.loadAsset() function and load it (assets > ARView > flower.glb).
Code:
mARView.loadAsset("ARView/flower.glb");
Then we create and initialize the scale and rotation arrays for the starting position. For now, we are entering hardcoded values here. In a future version, the starting position could be set interactively, for example by long-pressing the screen.
Note: The Scene Kit ARView feature already allows us to move, resize and rotate the object we have placed on the screen. To do this, we select the object and move our fingers on the screen to change its position, size or direction.
Here we can adjust the initial direction or size of the object by adjusting the rotation and scale values. (These values are used as parameters of the setInitialPose() function.)
Note: These values may need to change depending on the model used. To find appropriate values, you should experiment yourself. For details about these values, see the documentation of the ARView setInitialPose() function.
Code:
float[] scale = new float[] { 0.15f, 0.15f, 0.15f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.500f, 0.0f };
Then we set the scale and rotation values we created as the starting position.
Code:
mARView.setInitialPose(scale, rotation);
After this process, we set the boolean flag to indicate that the object has been loaded and we update the text of the button.
Code:
isLoadFlowerResource = true;
mButtonFlower.setText("Clear Flower");
If the object is already loaded, we clear the resource and load the empty object so that we remove the object from the screen.
Code:
mARView.clearResource();
mARView.loadAsset("");
Then we reset the boolean flag and finish by updating the button text.
Code:
isLoadFlowerResource = false;
mButtonFlower.setText("Load Flower");
Finally, we should not forget to override the following methods as in the code to ensure synchronization.
Code:
@Override
protected void onPause() {
super.onPause();
mARView.onPause();
}
@Override
protected void onResume() {
super.onResume();
mARView.onResume();
}
@Override
protected void onDestroy() {
super.onDestroy();
mARView.destroy();
}
The overview of OfficeActivity.java should be as follows.
Code:
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;
import com.huawei.hms.scene.sdk.ARView;
public class OfficeActivity extends Activity {
private ARView mARView;
private Button mButtonFlower;
private boolean isLoadFlowerResource = false;
private boolean isLoadAquariumResource = false;
private Button mButtonAquarium;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_office);
mARView = findViewById(R.id.ar_view);
mButtonFlower = findViewById(R.id.button_flower);
mButtonAquarium = findViewById(R.id.button_aquarium);
Toast.makeText(this, "Please move the mobile phone slowly to find the plane", Toast.LENGTH_LONG).show();
}
/**
* Synchronously call the onPause() method of the ARView.
*/
@Override
protected void onPause() {
super.onPause();
mARView.onPause();
}
/**
* Synchronously call the onResume() method of the ARView.
*/
@Override
protected void onResume() {
super.onResume();
mARView.onResume();
}
/**
* If quick rebuilding is allowed for the current activity, destroy() of ARView must be invoked synchronously.
*/
@Override
protected void onDestroy() {
super.onDestroy();
mARView.destroy();
}
public void onButtonFlowerToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadFlowerResource) {
// Load 3D model.
mARView.loadAsset("ARView/flower.glb");
float[] scale = new float[] { 0.15f, 0.15f, 0.15f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.500f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadFlowerResource = true;
mButtonFlower.setText("Clear Flower");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadFlowerResource = false;
mButtonFlower.setText("Load Flower");
}
}
public void onButtonAquariumToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadAquariumResource) {
// Load 3D model.
mARView.loadAsset("ARView/aquarium.glb");
float[] scale = new float[] { 0.015f, 0.015f, 0.015f };
float[] rotation = new float[] { 0.0f, 0.0f, 0.0f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadAquariumResource = true;
mButtonAquarium.setText("Clear Aquarium");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadAquariumResource = false;
mButtonAquarium.setText("Load Aquarium");
}
}
}
In this way, we have added the ARView feature of Scene Kit to our application and can now use it. Let's test the ARView part on a device that supports the Scene Kit ARView feature.
Let’s place plants and aquariums on our table as below and see how it looks.
In order for ARView to recognize the ground, you first need to move the camera slowly until the plane points you see in the photo appear on the screen. After the plane points appear on the ground, we tap the Load Flower button to indicate that we will add a plant, then tap the point on the screen where we want to place it. We can add an aquarium in the same way by using the Load Aquarium button.
I placed an aquarium and plants on my table. You can test how it looks by placing plants or aquariums on your table or anywhere. You can see how it looks in the photo below.
Note: “Clear Flower” and “Clear Aquarium” buttons will remove the objects we have placed on the screen.
After creating the objects, we can select an object and move it, or change its size or direction, as you can see in the picture below. Under normal conditions, the selected object turns red. (The color of some models doesn't change; for example, the aquarium model doesn't turn red when selected.)
If we want to change the size of the object after selecting it, we can pinch with two fingers to zoom in and out. In the picture above you can see that I changed the plants' sizes. We can also move the selected object by dragging it, and change its direction by moving two fingers in a circular motion.
For more information, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201380649772650451&fid=0101187876626530001
Is Scene Kit free of charge?
Detecting and tracking the user's face movements can be a very powerful feature to have in your application. With it, you can treat face movements as input and use them to trigger specific actions in the app.
Requirements to use ML Kit:
Android Studio 3.X or later version
JDK 1.8 or later
To be able to use HMS ML Kit, you need to integrate HMS Core into your project and also add the HMS ML Kit SDK. Add the Write External Storage and Camera permissions afterwards.
You can click this link to integrate HMS Core into your project.
After integrating HMS Core, add the HMS ML Kit dependencies to the app-level build.gradle file.
Code:
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.5.300'
implementation 'com.huawei.hms:ml-computer-vision-face-3d-model:2.0.5.300'
Add permissions to the Manifest File:
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Development Process:
In this article, we are going to use HMS ML Kit to track the user's face movements. To do so, we are going to use the 3D face detection features of ML Kit, with one activity and one helper class.
XML Structure of Activity:
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<SurfaceView
android:id="@+id/surfaceViewCamera"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
We are going to use a SurfaceView to show the camera preview on the screen. We will add a callback to surfaceHolderCamera to track when the surface is created, changed and destroyed.
Implementing HMS ML Kit Default View:
Kotlin:
class MainActivity : AppCompatActivity() {
private lateinit var mAnalyzer: ML3DFaceAnalyzer
private lateinit var mLensEngine: LensEngine
private lateinit var mFaceAnalyzerTransactor: FaceAnalyzerTransactor
private var surfaceHolderCamera: SurfaceHolder? = null
private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
if (hasPermissions(requiredPermissions))
init()
else
ActivityCompat.requestPermissions(this, requiredPermissions, 0)
}
private fun hasPermissions(permissions: Array<String>) = permissions.all {
ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
if (requestCode == 0 && grantResults.isNotEmpty() && hasPermissions(requiredPermissions))
init()
}
private fun init() {
mAnalyzer = createAnalyzer()
mFaceAnalyzerTransactor = FaceAnalyzerTransactor()
mAnalyzer.setTransactor(mFaceAnalyzerTransactor)
prepareViews()
}
private fun prepareViews() {
surfaceHolderCamera = surfaceViewCamera.holder
surfaceHolderCamera?.addCallback(surfaceHolderCallback)
}
ML3DFaceAnalyzer → A face analyzer, which is used to detect 3D faces.
Lens Engine → A class with the camera initialization, frame obtaining, and logic control functions encapsulated.
FaceAnalyzerTransactor → A class for processing recognition results. This class implements MLAnalyzer.MLTransactor (A general API that needs to be implemented by a detection result processor to process detection results) and uses the transactResult method in this class to obtain the recognition results and implement specific services.
SurfaceHolder → By adding a callback to the surface view, we detect surface changes such as created, changed and destroyed. We will use this callback to handle the different situations in the application.
In this code block, we basically request the permissions. If the user has already granted them, in the init function we create an ML3DFaceAnalyzer in order to detect 3D faces. We handle surface changes with the surfaceHolderCallback functions. Afterwards, to process the results, we create a FaceAnalyzerTransactor object and set it as the transactor of the analyzer. After everything is set, we start to analyze the user's face via the prepareViews function.
Kotlin:
private fun createAnalyzer(): ML3DFaceAnalyzer {
val settings = ML3DFaceAnalyzerSetting.Factory()
.setTracingAllowed(true)
.setPerformanceType(ML3DFaceAnalyzerSetting.TYPE_PRECISION)
.create()
return MLAnalyzerFactory.getInstance().get3DFaceAnalyzer(settings)
}
In this code block, we adjust the settings of our ML3DFaceAnalyzer object.
setTracingAllowed → Indicates whether to enable face tracking. Since we want to track the face, we set this value to true.
setPerformanceType → There are two preference modes for the analyzer: speed priority and precision priority in performing face detection. We chose TYPE_PRECISION in this example.
With the MLAnalyzerFactory.getInstance().get3DFaceAnalyzer(settings) method we create the 3D face analyzer object.
Handling Surface Changes
Kotlin:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceChanged(p0: SurfaceHolder, p1: Int, p2: Int, p3: Int) {
// p2 and p3 are the surface width and height (p1 is the pixel format).
mLensEngine = createLensEngine(p2, p3)
mLensEngine.run(p0)
}
override fun surfaceDestroyed(p0: SurfaceHolder) {
mLensEngine.release()
}
override fun surfaceCreated(p0: SurfaceHolder) {
}
}
private fun createLensEngine(width: Int, height: Int): LensEngine {
val lensEngineCreator = LensEngine.Creator(this, mAnalyzer)
.applyFps(20F)
.setLensType(LensEngine.FRONT_LENS)
.enableAutomaticFocus(true)
return if (resources.configuration.orientation == Configuration.ORIENTATION_PORTRAIT) {
lensEngineCreator.let {
it.applyDisplayDimension(height, width)
it.create()
}
} else {
lensEngineCreator.let {
it.applyDisplayDimension(width, height)
it.create()
}
}
}
When the surface changes, we create a LensEngine. In the surfaceChanged function, p2 and p3 give us the width and height of the surface (p1 is the pixel format), and we pass these values to createLensEngine. We create the LensEngine with the context and the 3D face analyzer we have already created.
applyFps → Sets the preview frame rate (FPS) of a camera. The preview frame rate of a camera depends on the firmware capability of the camera.
setLensType → Sets the camera type: BACK_LENS for the rear camera and FRONT_LENS for the front camera. In this example, we use the front camera.
enableAutomaticFocus → Enables or disables the automatic focus function for a camera.
The surfaceChanged method is also executed whenever the screen orientation changes. We handle this in the createLensEngine function: the LensEngine is created with the width and height values swapped or not depending on the current orientation.
run → Starts LensEngine. During startup, the LensEngine selects an appropriate camera based on the user’s requirements on the frame rate and preview image size, initializes parameters of the selected camera, and starts an analyzer thread to analyze and process frame data.
release → Releases resources occupied by LensEngine. We use this function whenever the surfaceDestroyed method is called.
Analysing Results
Kotlin:
class FaceAnalyzerTransactor: MLTransactor<ML3DFace> {
override fun transactResult(results: MLAnalyzer.Result<ML3DFace>?) {
val items: SparseArray<ML3DFace> = results!!.analyseList
Log.i("FaceOrientation:X ",items.get(0).get3DFaceEulerX().toString())
Log.i("FaceOrientation:Y",items.get(0).get3DFaceEulerY().toString())
Log.i("FaceOrientation:Z",items.get(0).get3DFaceEulerZ().toString())
}
override fun destroy() {
}
}
We created a FaceAnalyzerTransactor object in the init method. This object is used to process the results.
Since it implements the MLTransactor interface, we have to override the destroy and transactResult methods. With the transactResult method we can obtain the recognition results from the MLAnalyzer as a result list. Since there will be only one face in this example, we take the first ML3DFace object of the result list.
ML3DFace represents a detected 3D face. Its features include the width, height, rotation degrees, 3D coordinate points, and projection matrix.
get3DFaceEulerX() → 3D face pitch angle. A positive value indicates that the face is looking up, and a negative value indicates that the face is looking down.
get3DFaceEulerY() → 3D face yaw angle. A positive value indicates that the face turns to the right side of the image, and a negative value indicates that the face turns to the left side of the image.
get3DFaceEulerZ() → 3D face roll angle. A positive value indicates a clockwise rotation. A negative value indicates a counter-clockwise rotation.
The results of all the methods above range from -1 to +1 depending on the face movement. In this example, I will demonstrate the get3DFaceEulerY() method.
Not making any head movements
In this example, the outputs were between 0.04 and 0.1 when I did not move my head. If the values are between -0.1 and 0.1, the user is most likely not moving his/her head, or only making small movements.
Changing head direction to right
When I moved my head to the right, the outputs were generally between 0.5 and 1. So it can be said that if the value is greater than 0.5, the user has moved his/her head to the right.
Changing head direction to left
When I moved my head to the left, the outputs were generally between -0.5 and -1. So it can be said that if the value is less than -0.5, the user has moved his/her head to the left.
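Based on these observed ranges, you could map the yaw value to a coarse direction inside transactResult. The Kotlin sketch below is only an illustration; the thresholds and names are my own, taken from the observations above.
Code:
enum class HeadDirection { LEFT, RIGHT, CENTER }

// Maps the 3D face yaw angle (get3DFaceEulerY) to a coarse head direction,
// using the thresholds observed in the experiments above.
fun headDirectionFromEulerY(eulerY: Float): HeadDirection = when {
    eulerY > 0.5f -> HeadDirection.RIGHT
    eulerY < -0.5f -> HeadDirection.LEFT
    else -> HeadDirection.CENTER
}
For example, it could be called from transactResult as headDirectionFromEulerY(items.get(0).get3DFaceEulerY()).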
As you can see, HMS ML Kit can detect the user's face and track face movements successfully. The project and its code are accessible from the GitHub link in the references area.
You can also implement other ML Kit features such as text-related services, language/voice-related services, image-related services, face/body-related services and natural language processing services.
References:
ML Kit
ML Kit Documentation
Face Detection With ML Kit
Github
Very interesting feature.
Rebis said:
Very interesting feature.
I can't wait to use it.
Hi, this service will work on server side?
What is the accuracy rate in low light ?
Which permissions are required?
Background
Around half a year ago I decided to start decorating my new house. Before getting started, I did lots of research on a variety of topics relating to interior decoration, such as how to choose a consistent color scheme, which measurements to make and how to make them, and how to choose the right furniture. However, my preparations made me realize that no matter how well prepared you are, you're always going to run into unexpected challenges. Before rushing to the furniture store, I listed all the pieces of furniture that I wanted to place in my living room, including a sofa, tea table, potted plants, dining table, and carpet, and determined the expected dimensions, colors, and styles of these items. However, when I finally got to the furniture store, the dizzying variety of choices had me confused, and I found it very difficult to imagine how the different choices of furniture would actually look in my actual living room. At that moment a thought came to my mind: wouldn't it be great if there was an app that allows users to upload images of their home and then freely select different furniture products to see how they would look in their home? Such an app would surely save users wishing to decorate their home lots of time and unnecessary trouble, and reduce the risk of users being dissatisfied with the final result.
That's when the idea of developing an app myself came to mind. My initial idea was to design an app that people could use to quickly meet their home decoration needs by letting them see what furniture would look like in their homes. The basic way the app works is that users first upload one or more images of a room they want to decorate, and then set a reference parameter, such as the distance between the floor and the ceiling. Armed with this information, the app automatically calculates the parameters of other areas in the room. Then, users can upload images of furniture they like into a virtual shopping cart; when uploading such images, users need to specify the dimensions of the furniture. From the editing screen, users can drag and drop furniture from the shopping cart onto the image of the room to preview the effect. But then a problem arises: images of furniture dragged and dropped into the room look pasted on and do not blend naturally with their surroundings.
By a stroke of luck, I happened to discover HMS Core AR Engine when looking for a solution for the aforementioned problem. This development kit provides the ability to integrate virtual objects realistically into the real world, which is exactly what my app needs. With its plane detection capability, my app will be able to detect the real planes in a home and allow users to place virtual furniture based on these planes; and with its hit test capability, users can interact with virtual furniture to change their position and orientation in a natural manner.
Next, I'd like to briefly introduce the two capabilities this development kit offers.
AR Engine tracks the illumination, planes, images, objects, surfaces, and other environmental information, allowing apps to integrate virtual objects into the physical world so that they look and behave like they would if they were real. Its plane detection capability identifies groups of feature points on horizontal and vertical planes, as well as the boundaries of the planes, so that your app can place virtual objects on them.
In addition, the kit continuously tracks the location and orientation of the device relative to its surroundings, and establishes a unified geometric space between the virtual world and the physical world. Its hit test capability maps a point that the user taps on the screen to a point in the real environment: a ray is emitted from the device camera through the tapped point, and the intersection between that ray and a detected plane is returned. In this way, users can interact with any virtual object on their device screen.
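In code, this boils down to passing the tap's MotionEvent to the current ARFrame, picking a hit result that lies on a detected plane, and creating an anchor there for the virtual object. The Kotlin fragment below is only a hedged sketch of that idea; the article's full Java implementation appears later in hitTest4Result, and the placeVirtualObject callback is hypothetical.
Code:
import android.view.MotionEvent
import com.huawei.hiar.ARAnchor
import com.huawei.hiar.ARFrame
import com.huawei.hiar.ARPlane

// Sketch: map a screen tap to a point on a detected plane and anchor an object there.
fun onSingleTap(frame: ARFrame, event: MotionEvent, placeVirtualObject: (ARAnchor) -> Unit) {
    val hitOnPlane = frame.hitTest(event).firstOrNull { result ->
        (result.trackable as? ARPlane)?.isPoseInPolygon(result.hitPose) == true
    } ?: return
    // Anchoring to the hit pose keeps the virtual object fixed to the plane as tracking updates.
    placeVirtualObject(hitOnPlane.createAnchor())
}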
Functions and Features
Plane detection: Both horizontal and vertical planes are supported.
Accuracy: The margin of error is around 2.5 cm when the target plane is 1 m away from the camera.
Texture recognition delay: < 1s
Supports polygon fitting and plane merging.
Demo
Hit test
As shown in the demo, the app is able to identify the floor plane, so that the virtual suitcase can move over it as if it were real.
Developing Plane Detection
1. Create a WorldActivity object. This example demonstrates how to use the world AR scenario of AR Engine.
Code:
public class WorldActivity extends BaseActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Initialize DisplayRotationManager.
        mDisplayRotationManager = new DisplayRotationManager(this);
        // Initialize WorldRenderManager.
        mWorldRenderManager = new WorldRenderManager(this, this);
    }

    // Create a gesture processor.
    private void initGestureDetector() {
        mGestureDetector = new GestureDetector(this, new GestureDetector.SimpleOnGestureListener() {
        });
        mSurfaceView.setOnTouchListener(new View.OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                return mGestureDetector.onTouchEvent(event);
            }
        });
    }

    // Create ARWorldTrackingConfig in the onResume lifecycle.
    @Override
    protected void onResume() {
        super.onResume();
        mArSession = new ARSession(this.getApplicationContext());
        mConfig = new ARWorldTrackingConfig(mArSession);
        ...
    }

    // Initialize a refresh configuration class.
    private void refreshConfig(int lightingMode) {
        // Set the focus mode.
        mConfig.setFocusMode(ARConfigBase.FocusMode.AUTO_FOCUS);
        mArSession.configure(mConfig);
    }
}
2. Initialize the WorldRenderManager class, which manages rendering related to world scenarios, including label rendering and virtual object rendering.
Code:
public class WorldRenderManager implements GLSurfaceView.Renderer {
    // Implement the frame drawing method.
    @Override
    public void onDrawFrame(GL10 unused) {
        // Set the OpenGL texture ID used to store the camera preview stream data.
        mSession.setCameraTextureName(mTextureDisplay.getExternalTextureId());
        // Update the calculation result of AR Engine. You are advised to call this API when your app needs to obtain the latest data.
        ARFrame arFrame = mSession.update();
        // Obtain the camera specifications of the current frame.
        ARCamera arCamera = arFrame.getCamera();
        // Return a projection matrix used for coordinate calculation, which can be used for the transformation from the camera coordinate system to the clip coordinate system.
        arCamera.getProjectionMatrix(projectionMatrix, PROJ_MATRIX_OFFSET, PROJ_MATRIX_NEAR, PROJ_MATRIX_FAR);
        mSession.getAllTrackables(ARPlane.class);
        ...
    }
}
3. Initialize the VirtualObject class, which provides properties of the virtual object and the necessary methods for rendering the virtual object.
Code:
public class VirtualObject {
}
4. Initialize the ObjectDisplay class to draw virtual objects based on specified parameters.
Code:
public class ObjectDisplay {
}
Developing Hit Test
1. Initialize the WorldRenderManager class, which manages rendering related to world scenarios, including label rendering and virtual object rendering.
Code:
public class WorldRenderManager implements GLSurfaceView.Renderer {
// Pass the context.
public WorldRenderManager(Activity activity, Context context) {
mActivity = activity;
mContext = context;
…
}
// Set ARSession, which updates and obtains the latest data in OnDrawFrame.
public void setArSession(ARSession arSession) {
if (arSession == null) {
LogUtil.error(TAG, "setSession error, arSession is null!");
return;
}
mSession = arSession;
}
// Set ARWorldTrackingConfig to obtain the configuration mode.
public void setArWorldTrackingConfig(ARWorldTrackingConfig arConfig) {
if (arConfig == null) {
LogUtil.error(TAG, "setArWorldTrackingConfig error, arConfig is null!");
return;
}
mArWorldTrackingConfig = arConfig;
}
// Implement the onDrawFrame() method.
@Override
public void onDrawFrame(GL10 unused) {
mSession.setCameraTextureName(mTextureDisplay.getExternalTextureId());
ARFrame arFrame = mSession.update();
ARCamera arCamera = arFrame.getCamera();
...
}
// Output the hit result.
private ARHitResult hitTest4Result(ARFrame frame, ARCamera camera, MotionEvent event) {
    ARHitResult hitResult = null;
    List<ARHitResult> hitTestResults = frame.hitTest(event);
    for (int i = 0; i < hitTestResults.size(); i++) {
        // Determine whether the hit point is within the plane polygon.
        ARHitResult hitResultTemp = hitTestResults.get(i);
        if (hitResultTemp == null) {
            continue;
        }
        ARTrackable trackable = hitResultTemp.getTrackable();
        boolean isPlanHitJudge = trackable instanceof ARPlane
            && ((ARPlane) trackable).isPoseInPolygon(hitResultTemp.getHitPose());
        // Determine whether the point cloud is tapped and whether the point faces the camera.
        boolean isPointHitJudge = trackable instanceof ARPoint
            && ((ARPoint) trackable).getOrientationMode() == ARPoint.OrientationMode.ESTIMATED_SURFACE_NORMAL;
        // Select points on the plane preferentially.
        if (isPlanHitJudge || isPointHitJudge) {
            hitResult = hitResultTemp;
            if (trackable instanceof ARPlane) {
                break;
            }
        }
    }
    return hitResult;
}
}
2. Create a WorldActivity object. This example demonstrates how to use the world AR scenario of AR Engine.
Code:
public class WorldActivity extends BaseActivity {
private ARSession mArSession;
private GLSurfaceView mSurfaceView;
private ARWorldTrackingConfig mConfig;
@Override
protected void onCreate(Bundle savedInstanceState) {
LogUtil.info(TAG, "onCreate");
super.onCreate(savedInstanceState);
setContentView(R.layout.world_java_activity_main);
mWorldRenderManager = new WorldRenderManager(this, this);
mWorldRenderManager.setDisplayRotationManage(mDisplayRotationManager);
mWorldRenderManager.setQueuedSingleTaps(mQueuedSingleTaps);
}
@Override
protected void onResume() {
if (!PermissionManager.hasPermission(this)) {
this.finish();
}
errorMessage = null;
if (mArSession == null) {
try {
if (!arEngineAbilityCheck()) {
finish();
return;
}
mArSession = new ARSession(this.getApplicationContext());
mConfig = new ARWorldTrackingConfig(mArSession);
refreshConfig(ARConfigBase.LIGHT_MODE_ENVIRONMENT_LIGHTING | ARConfigBase.LIGHT_MODE_ENVIRONMENT_TEXTURE);
} catch (Exception capturedException) {
setMessageWhenError(capturedException);
}
if (errorMessage != null) {
stopArSession();
return;
}
}
...
}
@Override
protected void onPause() {
LogUtil.info(TAG, "onPause start.");
super.onPause();
if (mArSession != null) {
mDisplayRotationManager.unregisterDisplayListener();
mSurfaceView.onPause();
mArSession.pause();
}
LogUtil.info(TAG, "onPause end.");
}
@Override
protected void onDestroy() {
LogUtil.info(TAG, "onDestroy start.");
if (mArSession != null) {
mArSession.stop();
mArSession = null;
}
if (mWorldRenderManager != null) {
mWorldRenderManager.releaseARAnchor();
}
super.onDestroy();
LogUtil.info(TAG, "onDestroy end.");
}
...
}
Summary
If you've ever done any interior decorating, I'm sure you've wanted the ability to see what furniture would look like in your home without having to purchase them first. After all, most furniture isn't cheap and delivery and assembly can be quite a hassle. That's why apps that allow users to place and view virtual furniture in their real homes are truly life-changing technologies. HMS Core AR Engine can help greatly streamline the development of such apps. With its plane detection and hit test capabilities, the development kit enables your app to accurately detect planes in the real world, and then blend virtual objects naturally into the real world. In addition to virtual home decoration, this powerful kit also has a broad range of other applications. For example, you can leverage its capabilities to develop an AR video game, an AR-based teaching app that allows students to view historical artifacts in 3D, or an e-commerce app with a virtual try-on feature. Try AR Engine now and explore the unlimited possibilities it provides.
Reference
AR Engine Development Guide