In-Memory Caching for Enhancing Subgraph Accessibility
Graphs are used in many fields, driven by the growth of social media and mobile devices. Many studies have examined caching techniques that reduce input/output costs when processing large amounts of graph data. In this paper, we propose a two-level caching scheme that considers the past usage patterns of subgraphs and graph connectivity, both features of the graph topology. The proposed cache is divided into a used cache and a prefetched cache, which manage previously used subgraphs and subgraphs expected to be used in the future, respectively. When memory is full, a strategy is needed to replace a subgraph in memory with a new one. Subgraphs in the used cache are managed by a time-to-live (TTL) value, and subgraphs with low TTL values are targeted for replacement. Subgraphs in the prefetched cache are managed in a queue, so first-in subgraphs are replaced first. When a cache hit occurs in the prefetched cache, the subgraph is migrated to, and subsequently managed in, the used cache. Performance evaluation shows that, by accounting for subgraph usage patterns and graph connectivity, the proposed scheme improves cache hit rates and data access speeds compared with conventional techniques. It can quickly process and analyze large graph queries in computing environments with limited memory, and can accelerate in-memory processing in applications with complex relationships between objects, such as the Internet of Things and social networks.
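The two-level design described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: a used cache whose entries carry TTL values (the lowest-TTL entry is evicted when full) and a prefetched cache managed as a FIFO queue, with a prefetch hit migrating the subgraph into the used cache. Capacities, the aging policy, and all names are illustrative assumptions.

```python
from collections import deque

class TwoLevelSubgraphCache:
    """Sketch of a two-level subgraph cache: a TTL-managed 'used' cache
    and a FIFO 'prefetched' cache. All parameters are illustrative."""

    def __init__(self, used_capacity=4, prefetch_capacity=4, initial_ttl=3):
        self.used = {}             # subgraph_id -> (subgraph, ttl)
        self.prefetched = deque()  # FIFO of (subgraph_id, subgraph)
        self.used_capacity = used_capacity
        self.prefetch_capacity = prefetch_capacity
        self.initial_ttl = initial_ttl

    def get(self, sg_id):
        # Hit in the used cache: refresh this entry's TTL, age the others.
        if sg_id in self.used:
            sg, _ = self.used[sg_id]
            self._age_except(sg_id)
            self.used[sg_id] = (sg, self.initial_ttl)
            return sg
        # Hit in the prefetched cache: migrate the subgraph to the used cache.
        for i, (pid, sg) in enumerate(self.prefetched):
            if pid == sg_id:
                del self.prefetched[i]
                self._admit_used(sg_id, sg)
                return sg
        return None  # miss: caller loads from storage

    def prefetch(self, sg_id, sg):
        # FIFO replacement: the first-in subgraph is evicted first.
        if len(self.prefetched) >= self.prefetch_capacity:
            self.prefetched.popleft()
        self.prefetched.append((sg_id, sg))

    def _admit_used(self, sg_id, sg):
        # When the used cache is full, evict the entry with the lowest TTL.
        if len(self.used) >= self.used_capacity:
            victim = min(self.used, key=lambda k: self.used[k][1])
            del self.used[victim]
        self._age_except(None)
        self.used[sg_id] = (sg, self.initial_ttl)

    def _age_except(self, keep):
        # Decrement the TTL of every entry except the one just accessed.
        for k, (sg, ttl) in list(self.used.items()):
            if k != keep:
                self.used[k] = (sg, ttl - 1)
```

In use, subgraphs predicted by graph connectivity would be loaded via `prefetch()`, and query processing would call `get()`; a prefetch hit promotes the subgraph into the TTL-managed tier, so frequently reused subgraphs survive longer than speculatively loaded ones.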
Related Results
Towards Intelligent Zone-Based Content Pre-Caching Approach in VANET for Congestion Control
In vehicular ad hoc networks (VANETs), content pre-caching is a significant technology that improves network performance and lowers network response delay. VANET faces network cong...
Optimal Video Caching at The Edge of Network by Using Machine Learning
Efficiently managing network resources in the dynamic field of video-on-demand (VoD) services is a significant challenge. This requires creative methods to optimiz...
Joint caching and sleeping optimisation for D2D‐aided ultra‐dense network
Device‐to‐device (D2D) communication provides the communication of the users in the vicinity and thereby decreases end‐to‐end delay and power consumption. More importantly, D2D com...
RMBCC: A Replica Migration-Based Cooperative Caching Scheme for Information-Centric Networks
How to maximize the advantages of in-network caching under limited cache space has always been a key issue in information-centric networking (ICN). Replica placement strategies aim...
Prototype Implementation of a Proxy Caching System for Streaming Media Objects
Existing techniques for caching Web objects are not appropriate for the multimedia streaming service. In this paper, the authors focus on the proxy caching problem specifically for...
Cooperative caching game based on social trust for D2D communication networks
SummaryContent sharing via device‐to‐device (D2D) communications has become a promising method to increase system throughput and reduce traffic load. Due to the characteristic of s...
On Edge Caching in Satellite–IoT Networks
The implementation of the Internet of Things (IoT) is mostly done through cellular networks which do not cover the whole world. In addition, the explosive growth of glob...


