All of lore.kernel.org
* [PATCH v2 0/5] Add update_mmu_tlb_range() to simplify code
@ 2024-05-06 15:51 ` Bang Li
  0 siblings, 0 replies; 26+ messages in thread
From: Bang Li @ 2024-05-06 15:51 UTC (permalink / raw)
  To: akpm, chenhuacai, tsbogend, paul.walmsley, palmer, chris, jcmvbkbc
  Cc: linux-kernel, linux-mm, loongarch, linux-riscv, david,
	ryan.roberts, ioworker0, libang.linux, Bang Li

This series adds update_mmu_tlb_range(), which updates the TLB entries
for an address range in a single batched call.

After commit 19eaf44954df ("mm: thp: support allocation of anonymous
multi-size THP"), we may need to batch-update the TLB for an address
range by calling update_mmu_tlb() in a loop. With
update_mmu_tlb_range(), we can simplify the code and possibly avoid
executing some unnecessary code on certain architectures.

Thanks,
Bang

Changes since v1 [1]:
 - Add __HAVE_ARCH_UPDATE_MMU_TLB_RANGE macro (per Lance Yang)

[1] https://lore.kernel.org/linux-mm/20240429103346.59115-1-libang.li@antgroup.com/

Bang Li (5):
  LoongArch: Add update_mmu_tlb_range()
  mips: Add update_mmu_tlb_range()
  riscv: Add update_mmu_tlb_range()
  xtensa: Add update_mmu_tlb_range()
  mm: Add update_mmu_tlb_range()

 arch/loongarch/include/asm/pgtable.h | 4 ++++
 arch/mips/include/asm/pgtable.h      | 4 ++++
 arch/riscv/include/asm/pgtable.h     | 4 ++++
 arch/xtensa/include/asm/pgtable.h    | 4 ++++
 arch/xtensa/mm/tlb.c                 | 6 ++++++
 include/linux/pgtable.h              | 8 ++++++++
 mm/memory.c                          | 4 +---
 7 files changed, 31 insertions(+), 3 deletions(-)

-- 
2.19.1.6.gb485710b


^ permalink raw reply	[flat|nested] 26+ messages in thread


* [PATCH v2 1/5] LoongArch: Add update_mmu_tlb_range()
  2024-05-06 15:51 ` Bang Li
@ 2024-05-06 15:51   ` Bang Li
  -1 siblings, 0 replies; 26+ messages in thread
From: Bang Li @ 2024-05-06 15:51 UTC (permalink / raw)
  To: akpm, chenhuacai, tsbogend, paul.walmsley, palmer, chris, jcmvbkbc
  Cc: linux-kernel, linux-mm, loongarch, linux-riscv, david,
	ryan.roberts, ioworker0, libang.linux, Bang Li

Add the update_mmu_tlb_range() function so that the TLB entries of an
address range can be updated in a single batched call.

Signed-off-by: Bang Li <libang.li@antgroup.com>
---
 arch/loongarch/include/asm/pgtable.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index af3acdf3481a..924b6b031f06 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -470,6 +470,10 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb	update_mmu_cache
 
+#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
+#define update_mmu_tlb_range(vma, addr, ptep, nr) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, nr)
+
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 			unsigned long address, pmd_t *pmdp)
 {
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 26+ messages in thread


* [PATCH v2 2/5] mips: Add update_mmu_tlb_range()
  2024-05-06 15:51 ` Bang Li
@ 2024-05-06 15:51   ` Bang Li
  -1 siblings, 0 replies; 26+ messages in thread
From: Bang Li @ 2024-05-06 15:51 UTC (permalink / raw)
  To: akpm, chenhuacai, tsbogend, paul.walmsley, palmer, chris, jcmvbkbc
  Cc: linux-kernel, linux-mm, loongarch, linux-riscv, david,
	ryan.roberts, ioworker0, libang.linux, Bang Li

Add the update_mmu_tlb_range() function so that the TLB entries of an
address range can be updated in a single batched call.

Signed-off-by: Bang Li <libang.li@antgroup.com>
---
 arch/mips/include/asm/pgtable.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index e27a4c83c548..9416c9b971e5 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -597,6 +597,10 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 #define	__HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb	update_mmu_cache
 
+#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
+#define update_mmu_tlb_range(vma, address, ptep, nr) \
+	update_mmu_cache_range(NULL, vma, address, ptep, nr)
+
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 	unsigned long address, pmd_t *pmdp)
 {
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 26+ messages in thread


* [PATCH v2 3/5] riscv: Add update_mmu_tlb_range()
  2024-05-06 15:51 ` Bang Li
@ 2024-05-06 15:51   ` Bang Li
  -1 siblings, 0 replies; 26+ messages in thread
From: Bang Li @ 2024-05-06 15:51 UTC (permalink / raw)
  To: akpm, chenhuacai, tsbogend, paul.walmsley, palmer, chris, jcmvbkbc
  Cc: linux-kernel, linux-mm, loongarch, linux-riscv, david,
	ryan.roberts, ioworker0, libang.linux, Bang Li

Add the update_mmu_tlb_range() function so that the TLB entries of an
address range can be updated in a single batched call.

Signed-off-by: Bang Li <libang.li@antgroup.com>
---
 arch/riscv/include/asm/pgtable.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 661b2b4fe758..f784c6dd2c66 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -489,6 +489,10 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #define update_mmu_tlb update_mmu_cache
 
+#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
+#define update_mmu_tlb_range(vma, addr, ptep, nr) \
+	update_mmu_cache_range(NULL, vma, addr, ptep, nr)
+
 static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmdp)
 {
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 26+ messages in thread


* [PATCH v2 4/5] xtensa: Add update_mmu_tlb_range()
  2024-05-06 15:51 ` Bang Li
@ 2024-05-06 15:51   ` Bang Li
  -1 siblings, 0 replies; 26+ messages in thread
From: Bang Li @ 2024-05-06 15:51 UTC (permalink / raw)
  To: akpm, chenhuacai, tsbogend, paul.walmsley, palmer, chris, jcmvbkbc
  Cc: linux-kernel, linux-mm, loongarch, linux-riscv, david,
	ryan.roberts, ioworker0, libang.linux, Bang Li

Add the update_mmu_tlb_range() function so that the TLB entries of an
address range can be updated in a single batched call.

Signed-off-by: Bang Li <libang.li@antgroup.com>
---
 arch/xtensa/include/asm/pgtable.h | 4 ++++
 arch/xtensa/mm/tlb.c              | 6 ++++++
 2 files changed, 10 insertions(+)

diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index 9a7e5e57ee9a..57f97e7e06d0 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -414,6 +414,10 @@ void update_mmu_tlb(struct vm_area_struct *vma,
 		    unsigned long address, pte_t *ptep);
 #define __HAVE_ARCH_UPDATE_MMU_TLB
 
+void update_mmu_tlb_range(struct vm_area_struct *vma,
+			unsigned long address, pte_t *ptep, unsigned int nr);
+#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
+
 #endif /* !defined (__ASSEMBLY__) */
 
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
diff --git a/arch/xtensa/mm/tlb.c b/arch/xtensa/mm/tlb.c
index d8b60d6e50a8..05efba86b870 100644
--- a/arch/xtensa/mm/tlb.c
+++ b/arch/xtensa/mm/tlb.c
@@ -169,6 +169,12 @@ void update_mmu_tlb(struct vm_area_struct *vma,
 	local_flush_tlb_page(vma, address);
 }
 
+void update_mmu_tlb_range(struct vm_area_struct *vma,
+			unsigned long address, pte_t *ptep, unsigned int nr)
+{
+	local_flush_tlb_range(vma, address, address + PAGE_SIZE * nr);
+}
+
 #ifdef CONFIG_DEBUG_TLB_SANITY
 
 static unsigned get_pte_for_vaddr(unsigned vaddr)
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 26+ messages in thread


* [PATCH v2 5/5] mm: Add update_mmu_tlb_range()
  2024-05-06 15:51 ` Bang Li
@ 2024-05-06 15:51   ` Bang Li
  -1 siblings, 0 replies; 26+ messages in thread
From: Bang Li @ 2024-05-06 15:51 UTC (permalink / raw)
  To: akpm, chenhuacai, tsbogend, paul.walmsley, palmer, chris, jcmvbkbc
  Cc: linux-kernel, linux-mm, loongarch, linux-riscv, david,
	ryan.roberts, ioworker0, libang.linux, Bang Li

After commit 19eaf44954df ("mm: thp: support allocation of anonymous
multi-size THP"), we may need to batch-update the TLB for an address
range by calling update_mmu_tlb() in a loop. We can simplify this by
adding the update_mmu_tlb_range() function, which may also avoid
executing some unnecessary code on certain architectures.

Signed-off-by: Bang Li <libang.li@antgroup.com>
---
 include/linux/pgtable.h | 8 ++++++++
 mm/memory.c             | 4 +---
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 18019f037bae..869bfe6054f1 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -737,6 +737,14 @@ static inline void update_mmu_tlb(struct vm_area_struct *vma,
 #define __HAVE_ARCH_UPDATE_MMU_TLB
 #endif
 
+#ifndef __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
+static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
+				unsigned long address, pte_t *ptep, unsigned int nr)
+{
+}
+#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
+#endif
+
 /*
  * Some architectures may be able to avoid expensive synchronization
  * primitives when modifications are made to PTE's which are already
diff --git a/mm/memory.c b/mm/memory.c
index eea6e4984eae..2d53e29cf76e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4421,7 +4421,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	vm_fault_t ret = 0;
 	int nr_pages = 1;
 	pte_t entry;
-	int i;
 
 	/* File mapping without ->vm_ops ? */
 	if (vma->vm_flags & VM_SHARED)
@@ -4491,8 +4490,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 		update_mmu_tlb(vma, addr, vmf->pte);
 		goto release;
 	} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
-		for (i = 0; i < nr_pages; i++)
-			update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
+		update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
 		goto release;
 	}
 
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 26+ messages in thread


* Re: [PATCH v2 5/5] mm: Add update_mmu_tlb_range()
  2024-05-06 15:51   ` Bang Li
@ 2024-05-06 16:07     ` Lance Yang
  -1 siblings, 0 replies; 26+ messages in thread
From: Lance Yang @ 2024-05-06 16:07 UTC (permalink / raw)
  To: Bang Li
  Cc: akpm, chenhuacai, tsbogend, paul.walmsley, palmer, chris,
	jcmvbkbc, linux-kernel, linux-mm, loongarch, linux-riscv, david,
	ryan.roberts, libang.linux

On Mon, May 6, 2024 at 11:52 PM Bang Li <libang.li@antgroup.com> wrote:
>
> After the commit 19eaf44954df ("mm: thp: support allocation of anonymous
> multi-size THP"), it may need to batch update tlb of an address range
> through the update_mmu_tlb function. We can simplify this operation by
> adding the update_mmu_tlb_range function, which may also reduce the
> execution of some unnecessary code in some architectures.
>
> Signed-off-by: Bang Li <libang.li@antgroup.com>
> ---
>  include/linux/pgtable.h | 8 ++++++++
>  mm/memory.c             | 4 +---
>  2 files changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 18019f037bae..869bfe6054f1 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -737,6 +737,14 @@ static inline void update_mmu_tlb(struct vm_area_struct *vma,
>  #define __HAVE_ARCH_UPDATE_MMU_TLB
>  #endif
>
> +#ifndef __HAVE_ARCH_UPDATE_MMU_TLB_RANGE

IIRC, the contemporary practice is to define a macro with the same name
as the function if it is being overridden.

Thanks,
Lance

> +static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
> +                               unsigned long address, pte_t *ptep, unsigned int nr)
> +{
> +}
> +#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
> +#endif
> +
>  /*
>   * Some architectures may be able to avoid expensive synchronization
>   * primitives when modifications are made to PTE's which are already
> diff --git a/mm/memory.c b/mm/memory.c
> index eea6e4984eae..2d53e29cf76e 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4421,7 +4421,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>         vm_fault_t ret = 0;
>         int nr_pages = 1;
>         pte_t entry;
> -       int i;
>
>         /* File mapping without ->vm_ops ? */
>         if (vma->vm_flags & VM_SHARED)
> @@ -4491,8 +4490,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>                 update_mmu_tlb(vma, addr, vmf->pte);
>                 goto release;
>         } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> -               for (i = 0; i < nr_pages; i++)
> -                       update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
> +               update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
>                 goto release;
>         }
>
> --
> 2.19.1.6.gb485710b
>

^ permalink raw reply	[flat|nested] 26+ messages in thread


* Re: [PATCH v2 5/5] mm: Add update_mmu_tlb_range()
  2024-05-06 16:07     ` Lance Yang
@ 2024-05-07  3:26       ` Bang Li
  -1 siblings, 0 replies; 26+ messages in thread
From: Bang Li @ 2024-05-07  3:26 UTC (permalink / raw)
  To: Lance Yang
  Cc: akpm, chenhuacai, tsbogend, paul.walmsley, palmer, chris,
	jcmvbkbc, linux-kernel, linux-mm, loongarch, linux-riscv, david,
	ryan.roberts, libang.linux

Hey Lance,

Thanks for taking the time to review!

On 2024/5/7 0:07, Lance Yang wrote:
> On Mon, May 6, 2024 at 11:52 PM Bang Li <libang.li@antgroup.com> wrote:
>>
>> After the commit 19eaf44954df ("mm: thp: support allocation of anonymous
>> multi-size THP"), it may need to batch update tlb of an address range
>> through the update_mmu_tlb function. We can simplify this operation by
>> adding the update_mmu_tlb_range function, which may also reduce the
>> execution of some unnecessary code in some architectures.
>>
>> Signed-off-by: Bang Li <libang.li@antgroup.com>
>> ---
>>   include/linux/pgtable.h | 8 ++++++++
>>   mm/memory.c             | 4 +---
>>   2 files changed, 9 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>> index 18019f037bae..869bfe6054f1 100644
>> --- a/include/linux/pgtable.h
>> +++ b/include/linux/pgtable.h
>> @@ -737,6 +737,14 @@ static inline void update_mmu_tlb(struct vm_area_struct *vma,
>>   #define __HAVE_ARCH_UPDATE_MMU_TLB
>>   #endif
>>
>> +#ifndef __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
> 
> IIRC, the contemporary practice is to define a macro with the same name
> as the function if it is being overridden.

The __HAVE_ARCH_UPDATE_MMU_TLB_RANGE macro defined here is aligned with
the __HAVE_ARCH_UPDATE_MMU_TLB macro that corresponds to the
update_mmu_tlb() function. IMO, it is better to stay consistent with
that existing convention in this case.

Thanks,
Bang

> 
> Thanks,
> Lance
> 
>> +static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
>> +                               unsigned long address, pte_t *ptep, unsigned int nr)
>> +{
>> +}
>> +#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
>> +#endif
>> +
>>   /*
>>    * Some architectures may be able to avoid expensive synchronization
>>    * primitives when modifications are made to PTE's which are already
>> diff --git a/mm/memory.c b/mm/memory.c
>> index eea6e4984eae..2d53e29cf76e 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -4421,7 +4421,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>>          vm_fault_t ret = 0;
>>          int nr_pages = 1;
>>          pte_t entry;
>> -       int i;
>>
>>          /* File mapping without ->vm_ops ? */
>>          if (vma->vm_flags & VM_SHARED)
>> @@ -4491,8 +4490,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>>                  update_mmu_tlb(vma, addr, vmf->pte);
>>                  goto release;
>>          } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
>> -               for (i = 0; i < nr_pages; i++)
>> -                       update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
>> +               update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
>>                  goto release;
>>          }
>>
>> --
>> 2.19.1.6.gb485710b
>>

^ permalink raw reply	[flat|nested] 26+ messages in thread


* Re: [PATCH v2 3/5] riscv: Add update_mmu_tlb_range()
  2024-05-06 15:51   ` Bang Li
@ 2024-05-07  5:35     ` Alexandre Ghiti
  -1 siblings, 0 replies; 26+ messages in thread
From: Alexandre Ghiti @ 2024-05-07  5:35 UTC (permalink / raw)
  To: Bang Li, akpm, chenhuacai, tsbogend, paul.walmsley, palmer,
	chris, jcmvbkbc
  Cc: linux-kernel, linux-mm, loongarch, linux-riscv, david,
	ryan.roberts, ioworker0, libang.linux

Hi Bang,

On 06/05/2024 17:51, Bang Li wrote:
> Added update_mmu_tlb_range function, we can batch update tlb of an
> address range.
>
> Signed-off-by: Bang Li <libang.li@antgroup.com>
> ---
>   arch/riscv/include/asm/pgtable.h | 4 ++++
>   1 file changed, 4 insertions(+)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 661b2b4fe758..f784c6dd2c66 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -489,6 +489,10 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
>   #define __HAVE_ARCH_UPDATE_MMU_TLB
>   #define update_mmu_tlb update_mmu_cache
>   
> +#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
> +#define update_mmu_tlb_range(vma, addr, ptep, nr) \
> +	update_mmu_cache_range(NULL, vma, addr, ptep, nr)
> +
>   static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
>   		unsigned long address, pmd_t *pmdp)
>   {


You can add:

Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>

Thanks,

Alex


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 3/5] riscv: Add update_mmu_tlb_range()
  2024-05-07  5:35     ` Alexandre Ghiti
@ 2024-05-10  2:18       ` Bang Li
  -1 siblings, 0 replies; 26+ messages in thread
From: Bang Li @ 2024-05-10  2:18 UTC (permalink / raw)
  To: Alexandre Ghiti, akpm, chenhuacai, tsbogend, paul.walmsley,
	palmer, chris, jcmvbkbc
  Cc: linux-kernel, linux-mm, loongarch, linux-riscv, david,
	ryan.roberts, ioworker0, libang.linux

Thanks, Alex!

On 2024/5/7 13:35, Alexandre Ghiti wrote:
> Hi Bang,
> 
> On 06/05/2024 17:51, Bang Li wrote:
>> Added update_mmu_tlb_range function, we can batch update tlb of an
>> address range.
>>
>> Signed-off-by: Bang Li <libang.li@antgroup.com>
>> ---
>>   arch/riscv/include/asm/pgtable.h | 4 ++++
>>   1 file changed, 4 insertions(+)
>>
>> diff --git a/arch/riscv/include/asm/pgtable.h 
>> b/arch/riscv/include/asm/pgtable.h
>> index 661b2b4fe758..f784c6dd2c66 100644
>> --- a/arch/riscv/include/asm/pgtable.h
>> +++ b/arch/riscv/include/asm/pgtable.h
>> @@ -489,6 +489,10 @@ static inline void update_mmu_cache_range(struct 
>> vm_fault *vmf,
>>   #define __HAVE_ARCH_UPDATE_MMU_TLB
>>   #define update_mmu_tlb update_mmu_cache
>> +#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
>> +#define update_mmu_tlb_range(vma, addr, ptep, nr) \
>> +    update_mmu_cache_range(NULL, vma, addr, ptep, nr)
>> +
>>   static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
>>           unsigned long address, pmd_t *pmdp)
>>   {
> 
> 
> You can add:
> 
> Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
> 
> Thanks,
> 
> Alex

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 5/5] mm: Add update_mmu_tlb_range()
  2024-05-06 15:51   ` Bang Li
@ 2024-05-10  9:05     ` Ryan Roberts
  -1 siblings, 0 replies; 26+ messages in thread
From: Ryan Roberts @ 2024-05-10  9:05 UTC (permalink / raw)
  To: Bang Li, akpm, chenhuacai, tsbogend, paul.walmsley, palmer,
	chris, jcmvbkbc
  Cc: linux-kernel, linux-mm, loongarch, linux-riscv, david, ioworker0,
	libang.linux

On 06/05/2024 16:51, Bang Li wrote:
> After the commit 19eaf44954df ("mm: thp: support allocation of anonymous
> multi-size THP"), it may need to batch update tlb of an address range
> through the update_mmu_tlb function. We can simplify this operation by
> adding the update_mmu_tlb_range function, which may also reduce the
> execution of some unnecessary code in some architectures.
> 
> Signed-off-by: Bang Li <libang.li@antgroup.com>
> ---
>  include/linux/pgtable.h | 8 ++++++++
>  mm/memory.c             | 4 +---
>  2 files changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 18019f037bae..869bfe6054f1 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -737,6 +737,14 @@ static inline void update_mmu_tlb(struct vm_area_struct *vma,
>  #define __HAVE_ARCH_UPDATE_MMU_TLB
>  #endif

Given you are implementing update_mmu_tlb_range() in all the arches that
currently override update_mmu_tlb() I wonder if it would be cleaner to remove
update_mmu_tlb() from all those arches, and define generically, removing the
ability for arches to override it:

static inline void update_mmu_tlb(struct vm_area_struct *vma,
				unsigned long address, pte_t *ptep)
{
	update_mmu_tlb_range(vma, address, ptep, 1);
}

>  
> +#ifndef __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
> +static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
> +				unsigned long address, pte_t *ptep, unsigned int nr)
> +{
> +}
> +#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
> +#endif

Then you could use the modern override scheme as Lance suggested and you won't
have any confusion with __HAVE_ARCH_UPDATE_MMU_TLB because it won't exist anymore.
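Something like this untested, user-space sketch (stub types rather than
the real kernel ones) shows how the pieces would fit together once only
update_mmu_tlb_range() is overridable:

```c
#include <assert.h>

struct vm_area_struct { int dummy; };
typedef unsigned long pte_t;

static unsigned int last_nr;	/* records the nr the range hook saw */

/* Pretend an architecture overrode the range hook via the modern
 * scheme (a macro named after the function). A real arch would
 * update/flush TLB entries for the nr PTEs starting at ptep. */
static inline void arch_update_mmu_tlb_range(struct vm_area_struct *vma,
					     unsigned long address,
					     pte_t *ptep, unsigned int nr)
{
	last_nr = nr;
}
#define update_mmu_tlb_range arch_update_mmu_tlb_range

/* Generic update_mmu_tlb, no longer overridable by arches: it is just
 * the single-entry case of the range helper. */
static inline void update_mmu_tlb(struct vm_area_struct *vma,
				  unsigned long address, pte_t *ptep)
{
	update_mmu_tlb_range(vma, address, ptep, 1);
}
```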

> +
>  /*
>   * Some architectures may be able to avoid expensive synchronization
>   * primitives when modifications are made to PTE's which are already
> diff --git a/mm/memory.c b/mm/memory.c
> index eea6e4984eae..2d53e29cf76e 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4421,7 +4421,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  	vm_fault_t ret = 0;
>  	int nr_pages = 1;
>  	pte_t entry;
> -	int i;
>  
>  	/* File mapping without ->vm_ops ? */
>  	if (vma->vm_flags & VM_SHARED)
> @@ -4491,8 +4490,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  		update_mmu_tlb(vma, addr, vmf->pte);
>  		goto release;
>  	} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> -		for (i = 0; i < nr_pages; i++)
> -			update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
> +		update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);

I certainly agree that this will be a useful helper to have. I expect there will
be more users in future.

>  		goto release;
>  	}
>  


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 5/5] mm: Add update_mmu_tlb_range()
  2024-05-10  9:05     ` Ryan Roberts
@ 2024-05-10  9:19       ` Lance Yang
  -1 siblings, 0 replies; 26+ messages in thread
From: Lance Yang @ 2024-05-10  9:19 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Bang Li, akpm, chenhuacai, tsbogend, paul.walmsley, palmer,
	chris, jcmvbkbc, linux-kernel, linux-mm, loongarch, linux-riscv,
	david, libang.linux

On Fri, May 10, 2024 at 5:05 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 06/05/2024 16:51, Bang Li wrote:
> > After the commit 19eaf44954df ("mm: thp: support allocation of anonymous
> > multi-size THP"), it may need to batch update tlb of an address range
> > through the update_mmu_tlb function. We can simplify this operation by
> > adding the update_mmu_tlb_range function, which may also reduce the
> > execution of some unnecessary code in some architectures.
> >
> > Signed-off-by: Bang Li <libang.li@antgroup.com>
> > ---
> >  include/linux/pgtable.h | 8 ++++++++
> >  mm/memory.c             | 4 +---
> >  2 files changed, 9 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 18019f037bae..869bfe6054f1 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -737,6 +737,14 @@ static inline void update_mmu_tlb(struct vm_area_struct *vma,
> >  #define __HAVE_ARCH_UPDATE_MMU_TLB
> >  #endif
>
> Given you are implementing update_mmu_tlb_range() in all the arches that
> currently override update_mmu_tlb() I wonder if it would be cleaner to remove
> update_mmu_tlb() from all those arches, and define generically, removing the
> ability for arches to override it:

Sounds great! Let's get it done.

>
> static inline void update_mmu_tlb(struct vm_area_struct *vma,
>                                 unsigned long address, pte_t *ptep)
> {
>         update_mmu_tlb_range(vma, address, ptep, 1);
> }
>
> >
> > +#ifndef __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
> > +static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
> > +                             unsigned long address, pte_t *ptep, unsigned int nr)
> > +{
> > +}
> > +#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
> > +#endif
>
> Then you could use the modern override scheme as Lance suggested and you won't
> have any confusion with __HAVE_ARCH_UPDATE_MMU_TLB because it won't exist anymore.

+1. It might be better to use the modern override scheme :)

Thanks,
Lance

>
> > +
> >  /*
> >   * Some architectures may be able to avoid expensive synchronization
> >   * primitives when modifications are made to PTE's which are already
> > diff --git a/mm/memory.c b/mm/memory.c
> > index eea6e4984eae..2d53e29cf76e 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4421,7 +4421,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >       vm_fault_t ret = 0;
> >       int nr_pages = 1;
> >       pte_t entry;
> > -     int i;
> >
> >       /* File mapping without ->vm_ops ? */
> >       if (vma->vm_flags & VM_SHARED)
> > @@ -4491,8 +4490,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >               update_mmu_tlb(vma, addr, vmf->pte);
> >               goto release;
> >       } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
> > -             for (i = 0; i < nr_pages; i++)
> > -                     update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
> > +             update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
>
> I certainly agree that this will be a useful helper to have. I expect there will
> be more users in future.
>
> >               goto release;
> >       }
> >
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 5/5] mm: Add update_mmu_tlb_range()
  2024-05-10  9:05     ` Ryan Roberts
@ 2024-05-10 16:36       ` Bang Li
  -1 siblings, 0 replies; 26+ messages in thread
From: Bang Li @ 2024-05-10 16:36 UTC (permalink / raw)
  To: Ryan Roberts, akpm, chenhuacai, tsbogend, paul.walmsley, palmer,
	chris, jcmvbkbc
  Cc: linux-kernel, linux-mm, loongarch, linux-riscv, david, ioworker0,
	libang.linux, baolin.wang

Hi Ryan,

Thanks for your review!

On 2024/5/10 17:05, Ryan Roberts wrote:
> On 06/05/2024 16:51, Bang Li wrote:
>> After the commit 19eaf44954df ("mm: thp: support allocation of anonymous
>> multi-size THP"), it may need to batch update tlb of an address range
>> through the update_mmu_tlb function. We can simplify this operation by
>> adding the update_mmu_tlb_range function, which may also reduce the
>> execution of some unnecessary code in some architectures.
>>
>> Signed-off-by: Bang Li <libang.li@antgroup.com>
>> ---
>>   include/linux/pgtable.h | 8 ++++++++
>>   mm/memory.c             | 4 +---
>>   2 files changed, 9 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>> index 18019f037bae..869bfe6054f1 100644
>> --- a/include/linux/pgtable.h
>> +++ b/include/linux/pgtable.h
>> @@ -737,6 +737,14 @@ static inline void update_mmu_tlb(struct vm_area_struct *vma,
>>   #define __HAVE_ARCH_UPDATE_MMU_TLB
>>   #endif
> 
> Given you are implementing update_mmu_tlb_range() in all the arches that
> currently override update_mmu_tlb() I wonder if it would be cleaner to remove
> update_mmu_tlb() from all those arches, and define generically, removing the
> ability for arches to override it:
> 
> static inline void update_mmu_tlb(struct vm_area_struct *vma,
> 				unsigned long address, pte_t *ptep)
> {
> 	update_mmu_tlb_range(vma, address, ptep, 1);
> }

Agreed! Thank you for your suggestion, I will modify it in the next version.

> 
>>   
>> +#ifndef __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
>> +static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
>> +				unsigned long address, pte_t *ptep, unsigned int nr)
>> +{
>> +}
>> +#define __HAVE_ARCH_UPDATE_MMU_TLB_RANGE
>> +#endif
> 
> Then you could use the modern override scheme as Lance suggested and you won't
> have any confusion with __HAVE_ARCH_UPDATE_MMU_TLB because it won't exist anymore.

Yes, with update_mmu_tlb() implemented generically on top of
update_mmu_tlb_range(), each architecture only needs to define the
update_mmu_tlb_range macro.
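For riscv, for instance, the arch side would then reduce to a single
macro, roughly as below (a standalone sketch with stub types so it
compiles on its own; the real update_mmu_cache_range() is the arch's
existing implementation, which performs the actual flush):

```c
#include <assert.h>
#include <stddef.h>

struct vm_fault;			/* opaque, as in the kernel */
struct vm_area_struct { int dummy; };
typedef unsigned long pte_t;

static unsigned int flushed_nr;

/* Stub for riscv's existing update_mmu_cache_range(); the real one
 * performs the TLB maintenance for nr entries starting at address. */
static inline void update_mmu_cache_range(struct vm_fault *vmf,
					  struct vm_area_struct *vma,
					  unsigned long address,
					  pte_t *ptep, unsigned int nr)
{
	flushed_nr = nr;
}

/* Under the proposed scheme, this one macro is all the arch defines: */
#define update_mmu_tlb_range(vma, addr, ptep, nr) \
	update_mmu_cache_range(NULL, vma, addr, ptep, nr)
```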

> 
>> +
>>   /*
>>    * Some architectures may be able to avoid expensive synchronization
>>    * primitives when modifications are made to PTE's which are already
>> diff --git a/mm/memory.c b/mm/memory.c
>> index eea6e4984eae..2d53e29cf76e 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -4421,7 +4421,6 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>>   	vm_fault_t ret = 0;
>>   	int nr_pages = 1;
>>   	pte_t entry;
>> -	int i;
>>   
>>   	/* File mapping without ->vm_ops ? */
>>   	if (vma->vm_flags & VM_SHARED)
>> @@ -4491,8 +4490,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>>   		update_mmu_tlb(vma, addr, vmf->pte);
>>   		goto release;
>>   	} else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
>> -		for (i = 0; i < nr_pages; i++)
>> -			update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
>> +		update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
> 
> I certainly agree that this will be a useful helper to have. I expect there will
> be more users in future.

Thank you for the confirmation. Baolin's "add mTHP support for anonymous
shmem" series [1] could also use this function to simplify its code.

[1] https://lore.kernel.org/linux-mm/cover.1714978902.git.baolin.wang@linux.alibaba.com/

Thanks,
Bang

> 
>>   		goto release;
>>   	}
>>   

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2024-05-10 16:42 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-05-06 15:51 [PATCH v2 0/5] Add update_mmu_tlb_range() to simplify code Bang Li
2024-05-06 15:51 ` Bang Li
2024-05-06 15:51 ` [PATCH v2 1/5] LoongArch: Add update_mmu_tlb_range() Bang Li
2024-05-06 15:51   ` Bang Li
2024-05-06 15:51 ` [PATCH v2 2/5] mips: " Bang Li
2024-05-06 15:51   ` Bang Li
2024-05-06 15:51 ` [PATCH v2 3/5] riscv: " Bang Li
2024-05-06 15:51   ` Bang Li
2024-05-07  5:35   ` Alexandre Ghiti
2024-05-07  5:35     ` Alexandre Ghiti
2024-05-10  2:18     ` Bang Li
2024-05-10  2:18       ` Bang Li
2024-05-06 15:51 ` [PATCH v2 4/5] xtensa: " Bang Li
2024-05-06 15:51   ` Bang Li
2024-05-06 15:51 ` [PATCH v2 5/5] mm: " Bang Li
2024-05-06 15:51   ` Bang Li
2024-05-06 16:07   ` Lance Yang
2024-05-06 16:07     ` Lance Yang
2024-05-07  3:26     ` Bang Li
2024-05-07  3:26       ` Bang Li
2024-05-10  9:05   ` Ryan Roberts
2024-05-10  9:05     ` Ryan Roberts
2024-05-10  9:19     ` Lance Yang
2024-05-10  9:19       ` Lance Yang
2024-05-10 16:36     ` Bang Li
2024-05-10 16:36       ` Bang Li
